00:00:00.001 Started by upstream project "autotest-per-patch" build number 130528
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.019 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.020 The recommended git tool is: git
00:00:00.020 using credential 00000000-0000-0000-0000-000000000002
00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.073 Fetching changes from the remote Git repository
00:00:00.075 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.131 Using shallow fetch with depth 1
00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.131 > git --version # timeout=10
00:00:00.170 > git --version # 'git version 2.39.2'
00:00:00.170 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.199 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.448 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.495 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.508 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:05.508 > git config core.sparsecheckout # timeout=10
00:00:05.520 > git read-tree -mu HEAD # timeout=10
00:00:05.536 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:05.554 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:05.554 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:05.624 [Pipeline] Start of Pipeline
00:00:05.634 [Pipeline] library
00:00:05.635 Loading library shm_lib@master
00:00:05.636 Library shm_lib@master is cached. Copying from home.
00:00:05.652 [Pipeline] node
00:00:05.698 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.700 [Pipeline] {
00:00:05.710 [Pipeline] catchError
00:00:05.711 [Pipeline] {
00:00:05.723 [Pipeline] wrap
00:00:05.731 [Pipeline] {
00:00:05.738 [Pipeline] stage
00:00:05.740 [Pipeline] { (Prologue)
00:00:05.953 [Pipeline] sh
00:00:06.261 + logger -p user.info -t JENKINS-CI
00:00:06.282 [Pipeline] echo
00:00:06.284 Node: CYP12
00:00:06.291 [Pipeline] sh
00:00:06.596 [Pipeline] setCustomBuildProperty
00:00:06.607 [Pipeline] echo
00:00:06.609 Cleanup processes
00:00:06.614 [Pipeline] sh
00:00:06.900 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.900 376845 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.916 [Pipeline] sh
00:00:07.206 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.206 ++ grep -v 'sudo pgrep'
00:00:07.206 ++ awk '{print $1}'
00:00:07.206 + sudo kill -9
00:00:07.206 + true
00:00:07.222 [Pipeline] cleanWs
00:00:07.234 [WS-CLEANUP] Deleting project workspace...
00:00:07.234 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.243 [WS-CLEANUP] done
00:00:07.248 [Pipeline] setCustomBuildProperty
00:00:07.261 [Pipeline] sh
00:00:07.544 + sudo git config --global --replace-all safe.directory '*'
00:00:07.632 [Pipeline] httpRequest
00:00:08.084 [Pipeline] echo
00:00:08.086 Sorcerer 10.211.164.101 is alive
00:00:08.095 [Pipeline] retry
00:00:08.097 [Pipeline] {
00:00:08.117 [Pipeline] httpRequest
00:00:08.124 HttpMethod: GET
00:00:08.126 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:08.126 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:08.134 Response Code: HTTP/1.1 200 OK
00:00:08.134 Success: Status code 200 is in the accepted range: 200,404
00:00:08.134 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:19.928 [Pipeline] }
00:00:19.943 [Pipeline] // retry
00:00:19.948 [Pipeline] sh
00:00:20.281 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:20.297 [Pipeline] httpRequest
00:00:20.914 [Pipeline] echo
00:00:20.916 Sorcerer 10.211.164.101 is alive
00:00:20.923 [Pipeline] retry
00:00:20.925 [Pipeline] {
00:00:20.938 [Pipeline] httpRequest
00:00:20.943 HttpMethod: GET
00:00:20.944 URL: http://10.211.164.101/packages/spdk_310cb0643856268b9d158523a156545c8ea3648f.tar.gz
00:00:20.944 Sending request to url: http://10.211.164.101/packages/spdk_310cb0643856268b9d158523a156545c8ea3648f.tar.gz
00:00:20.950 Response Code: HTTP/1.1 200 OK
00:00:20.950 Success: Status code 200 is in the accepted range: 200,404
00:00:20.951 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_310cb0643856268b9d158523a156545c8ea3648f.tar.gz
00:05:14.023 [Pipeline] }
00:05:14.039 [Pipeline] // retry
00:05:14.046 [Pipeline] sh
00:05:14.335 + tar --no-same-owner -xf spdk_310cb0643856268b9d158523a156545c8ea3648f.tar.gz
00:05:16.893 [Pipeline] sh
00:05:17.181 + git -C spdk log --oneline -n5
00:05:17.181 310cb0643 event: move struct spdk_lw_thread to internal header
00:05:17.181 0b219088f event: move function declarations to inside of extern "C" guard
00:05:17.181 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:05:17.181 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:05:17.181 43f6d3385 nvmf: remove use of STAILQ for last_wqe events
00:05:17.193 [Pipeline] }
00:05:17.208 [Pipeline] // stage
00:05:17.217 [Pipeline] stage
00:05:17.219 [Pipeline] { (Prepare)
00:05:17.237 [Pipeline] writeFile
00:05:17.252 [Pipeline] sh
00:05:17.539 + logger -p user.info -t JENKINS-CI
00:05:17.552 [Pipeline] sh
00:05:17.840 + logger -p user.info -t JENKINS-CI
00:05:17.854 [Pipeline] sh
00:05:18.143 + cat autorun-spdk.conf
00:05:18.143 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:18.143 SPDK_TEST_NVMF=1
00:05:18.143 SPDK_TEST_NVME_CLI=1
00:05:18.143 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:18.143 SPDK_TEST_NVMF_NICS=e810
00:05:18.143 SPDK_TEST_VFIOUSER=1
00:05:18.143 SPDK_RUN_UBSAN=1
00:05:18.143 NET_TYPE=phy
00:05:18.152 RUN_NIGHTLY=0
00:05:18.156 [Pipeline] readFile
00:05:18.181 [Pipeline] withEnv
00:05:18.183 [Pipeline] {
00:05:18.196 [Pipeline] sh
00:05:18.488 + set -ex
00:05:18.488 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:05:18.488 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:18.488 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:18.488 ++ SPDK_TEST_NVMF=1
00:05:18.488 ++ SPDK_TEST_NVME_CLI=1
00:05:18.488 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:18.488 ++ SPDK_TEST_NVMF_NICS=e810
00:05:18.488 ++ SPDK_TEST_VFIOUSER=1
00:05:18.488 ++ SPDK_RUN_UBSAN=1
00:05:18.488 ++ NET_TYPE=phy
00:05:18.488 ++ RUN_NIGHTLY=0
00:05:18.488 + case $SPDK_TEST_NVMF_NICS in
00:05:18.488 + DRIVERS=ice
00:05:18.488 + [[ tcp == \r\d\m\a ]]
00:05:18.488 + [[ -n ice ]]
00:05:18.488 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:05:18.488 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:05:18.488 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:05:18.488 rmmod: ERROR: Module irdma is not currently loaded
00:05:18.488 rmmod: ERROR: Module i40iw is not currently loaded
00:05:18.488 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:05:18.488 + true
00:05:18.488 + for D in $DRIVERS
00:05:18.488 + sudo modprobe ice
00:05:18.488 + exit 0
00:05:18.499 [Pipeline] }
00:05:18.513 [Pipeline] // withEnv
00:05:18.518 [Pipeline] }
00:05:18.531 [Pipeline] // stage
00:05:18.540 [Pipeline] catchError
00:05:18.542 [Pipeline] {
00:05:18.555 [Pipeline] timeout
00:05:18.555 Timeout set to expire in 1 hr 0 min
00:05:18.557 [Pipeline] {
00:05:18.571 [Pipeline] stage
00:05:18.573 [Pipeline] { (Tests)
00:05:18.588 [Pipeline] sh
00:05:18.878 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:18.878 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:18.878 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:18.878 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:05:18.878 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:18.878 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:18.878 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:05:18.878 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:18.878 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:18.878 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:18.878 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:05:18.878 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:18.878 + source /etc/os-release
00:05:18.878 ++ NAME='Fedora Linux'
00:05:18.878 ++ VERSION='39 (Cloud Edition)'
00:05:18.878 ++ ID=fedora
00:05:18.878 ++ VERSION_ID=39
00:05:18.878 ++ VERSION_CODENAME=
00:05:18.878 ++ PLATFORM_ID=platform:f39
00:05:18.878 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:18.878 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:18.878 ++ LOGO=fedora-logo-icon
00:05:18.878 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:18.878 ++ HOME_URL=https://fedoraproject.org/
00:05:18.878 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:18.878 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:18.878 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:18.878 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:18.878 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:18.878 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:18.878 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:18.878 ++ SUPPORT_END=2024-11-12
00:05:18.878 ++ VARIANT='Cloud Edition'
00:05:18.878 ++ VARIANT_ID=cloud
00:05:18.878 + uname -a
00:05:18.878 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:18.878 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:22.244 Hugepages
00:05:22.244 node hugesize free / total
00:05:22.244 node0 1048576kB 0 / 0
00:05:22.244 node0 2048kB 0 / 0
00:05:22.244 node1 1048576kB 0 / 0
00:05:22.244 node1 2048kB 0 / 0
00:05:22.244 
00:05:22.244 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:22.244 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:05:22.244 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:05:22.244 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:05:22.244 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:05:22.244 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:05:22.244 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:05:22.244 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:05:22.244 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:05:22.244 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:05:22.244 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:05:22.244 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:05:22.244 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:05:22.244 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:05:22.244 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:05:22.244 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:05:22.244 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:05:22.244 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:05:22.244 + rm -f /tmp/spdk-ld-path
00:05:22.244 + source autorun-spdk.conf
00:05:22.244 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:22.244 ++ SPDK_TEST_NVMF=1
00:05:22.244 ++ SPDK_TEST_NVME_CLI=1
00:05:22.244 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:22.244 ++ SPDK_TEST_NVMF_NICS=e810
00:05:22.244 ++ SPDK_TEST_VFIOUSER=1
00:05:22.244 ++ SPDK_RUN_UBSAN=1
00:05:22.244 ++ NET_TYPE=phy
00:05:22.244 ++ RUN_NIGHTLY=0
00:05:22.244 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:22.244 + [[ -n '' ]]
00:05:22.244 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:22.244 + for M in /var/spdk/build-*-manifest.txt
00:05:22.244 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:22.244 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:22.244 + for M in /var/spdk/build-*-manifest.txt
00:05:22.244 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:22.244 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:22.244 + for M in /var/spdk/build-*-manifest.txt
00:05:22.244 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:22.244 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:22.244 ++ uname
00:05:22.244 + [[ Linux == \L\i\n\u\x ]]
00:05:22.244 + sudo dmesg -T
00:05:22.244 + sudo dmesg --clear
00:05:22.244 + dmesg_pid=378983
00:05:22.244 + [[ Fedora Linux == FreeBSD ]]
00:05:22.244 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:22.244 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:22.244 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:22.244 + [[ -x /usr/src/fio-static/fio ]]
00:05:22.244 + export FIO_BIN=/usr/src/fio-static/fio
00:05:22.244 + FIO_BIN=/usr/src/fio-static/fio
00:05:22.244 + sudo dmesg -Tw
00:05:22.244 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:22.244 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:22.244 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:22.244 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:22.244 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:22.245 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:22.245 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:22.245 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:22.245 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:22.245 Test configuration:
00:05:22.245 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:22.245 SPDK_TEST_NVMF=1
00:05:22.245 SPDK_TEST_NVME_CLI=1
00:05:22.245 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:22.245 SPDK_TEST_NVMF_NICS=e810
00:05:22.245 SPDK_TEST_VFIOUSER=1
00:05:22.245 SPDK_RUN_UBSAN=1
00:05:22.245 NET_TYPE=phy
00:05:22.510 RUN_NIGHTLY=0
22:33:49 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
22:33:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
22:33:49 -- scripts/common.sh@15 -- $ shopt -s extglob
22:33:49 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
22:33:49 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
22:33:49 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
22:33:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:33:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:33:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:33:49 -- paths/export.sh@5 -- $ export PATH
22:33:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:33:49 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
22:33:49 -- common/autobuild_common.sh@479 -- $ date +%s
22:33:49 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727728429.XXXXXX
22:33:49 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727728429.YrKhpC
22:33:49 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
22:33:49 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
22:33:49 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
22:33:49 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
22:33:49 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
22:33:49 -- common/autobuild_common.sh@495 -- $ get_config_params
22:33:49 -- common/autotest_common.sh@407 -- $ xtrace_disable
22:33:49 -- common/autotest_common.sh@10 -- $ set +x
22:33:49 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
22:33:49 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
22:33:49 -- pm/common@17 -- $ local monitor
22:33:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:33:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:33:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:33:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:33:49 -- pm/common@21 -- $ date +%s
22:33:49 -- pm/common@21 -- $ date +%s
22:33:49 -- pm/common@25 -- $ sleep 1
22:33:49 -- pm/common@21 -- $ date +%s
22:33:49 -- pm/common@21 -- $ date +%s
22:33:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727728429
22:33:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727728429
22:33:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727728429
22:33:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727728429
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727728429_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727728429_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727728429_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727728429_collect-bmc-pm.bmc.pm.log
00:05:23.458 22:33:50 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
22:33:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
22:33:50 -- spdk/autobuild.sh@12 -- $ umask 022
22:33:50 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
22:33:50 -- spdk/autobuild.sh@16 -- $ date -u
00:05:23.458 Mon Sep 30 08:33:50 PM UTC 2024
22:33:50 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:23.458 v25.01-pre-19-g310cb0643
22:33:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
22:33:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
22:33:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
22:33:50 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
22:33:50 -- common/autotest_common.sh@1107 -- $ xtrace_disable
22:33:50 -- common/autotest_common.sh@10 -- $ set +x
00:05:23.458 ************************************
00:05:23.458 START TEST ubsan
00:05:23.458 ************************************
22:33:50 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:05:23.458 using ubsan
00:05:23.458 
00:05:23.458 real 0m0.001s
00:05:23.458 user 0m0.001s
00:05:23.458 sys 0m0.000s
22:33:50 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
22:33:50 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:23.458 ************************************
00:05:23.458 END TEST ubsan
00:05:23.458 ************************************
22:33:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
22:33:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
22:33:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
22:33:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
22:33:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
22:33:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
22:33:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
22:33:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
22:33:50 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:23.719 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:23.719 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:23.980 Using 'verbs' RDMA provider
00:05:39.833 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:52.092 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:52.613 Creating mk/config.mk...done.
00:05:52.613 Creating mk/cc.flags.mk...done.
00:05:52.613 Type 'make' to build.
00:05:52.613 22:34:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
22:34:19 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
22:34:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable
22:34:19 -- common/autotest_common.sh@10 -- $ set +x
00:05:52.613 ************************************
00:05:52.613 START TEST make
00:05:52.613 ************************************
22:34:19 make -- common/autotest_common.sh@1125 -- $ make -j144
00:05:53.186 make[1]: Nothing to be done for 'all'.
00:05:54.574 The Meson build system
00:05:54.574 Version: 1.5.0
00:05:54.574 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:05:54.574 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:54.574 Build type: native build
00:05:54.574 Project name: libvfio-user
00:05:54.574 Project version: 0.0.1
00:05:54.574 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:54.574 C linker for the host machine: cc ld.bfd 2.40-14
00:05:54.574 Host machine cpu family: x86_64
00:05:54.574 Host machine cpu: x86_64
00:05:54.574 Run-time dependency threads found: YES
00:05:54.574 Library dl found: YES
00:05:54.574 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:54.574 Run-time dependency json-c found: YES 0.17
00:05:54.574 Run-time dependency cmocka found: YES 1.1.7
00:05:54.574 Program pytest-3 found: NO
00:05:54.574 Program flake8 found: NO
00:05:54.574 Program misspell-fixer found: NO
00:05:54.574 Program restructuredtext-lint found: NO
00:05:54.574 Program valgrind found: YES (/usr/bin/valgrind)
00:05:54.574 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:54.574 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:54.574 Compiler for C supports arguments -Wwrite-strings: YES
00:05:54.574 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:54.574 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:54.574 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:54.574 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:54.574 Build targets in project: 8
00:05:54.574 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:54.574 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:54.574 
00:05:54.574 libvfio-user 0.0.1
00:05:54.574 
00:05:54.574 User defined options
00:05:54.574 buildtype : debug
00:05:54.574 default_library: shared
00:05:54.574 libdir : /usr/local/lib
00:05:54.574 
00:05:54.574 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:55.145 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:55.145 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:55.145 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:55.145 [3/37] Compiling C object samples/null.p/null.c.o
00:05:55.145 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:55.145 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:55.145 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:55.145 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:55.145 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:55.145 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:55.145 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:55.145 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:55.145 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:55.145 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:55.145 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:55.145 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:55.145 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:55.145 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:55.145 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:55.145 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:55.145 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:55.145 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:55.145 [22/37] Compiling C object samples/server.p/server.c.o
00:05:55.146 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:55.405 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:55.405 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:55.405 [26/37] Compiling C object samples/client.p/client.c.o
00:05:55.405 [27/37] Linking target samples/client
00:05:55.405 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:55.405 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:55.405 [30/37] Linking target test/unit_tests
00:05:55.405 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:05:55.665 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:55.665 [33/37] Linking target samples/null
00:05:55.665 [34/37] Linking target samples/shadow_ioeventfd_server
00:05:55.665 [35/37] Linking target samples/server
00:05:55.665 [36/37] Linking target samples/gpio-pci-idio-16
00:05:55.665 [37/37] Linking target samples/lspci
00:05:55.665 INFO: autodetecting backend as ninja
00:05:55.665 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:55.665 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:55.925 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:55.925 ninja: no work to do.
00:06:02.521 The Meson build system
00:06:02.521 Version: 1.5.0
00:06:02.521 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:06:02.521 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:06:02.521 Build type: native build
00:06:02.521 Program cat found: YES (/usr/bin/cat)
00:06:02.521 Project name: DPDK
00:06:02.521 Project version: 24.03.0
00:06:02.521 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:02.521 C linker for the host machine: cc ld.bfd 2.40-14
00:06:02.521 Host machine cpu family: x86_64
00:06:02.521 Host machine cpu: x86_64
00:06:02.521 Message: ## Building in Developer Mode ##
00:06:02.521 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:02.521 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:06:02.521 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:02.521 Program python3 found: YES (/usr/bin/python3)
00:06:02.521 Program cat found: YES (/usr/bin/cat)
00:06:02.521 Compiler for C supports arguments -march=native: YES
00:06:02.521 Checking for size of "void *" : 8
00:06:02.521 Checking for size of "void *" : 8 (cached)
00:06:02.521 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:02.521 Library m found: YES
00:06:02.521 Library numa found: YES
00:06:02.521 Has header "numaif.h" : YES
00:06:02.521 Library fdt found: NO
00:06:02.521 Library execinfo found: NO
00:06:02.521 Has header "execinfo.h" : YES
00:06:02.521 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:02.521 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:02.521 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:02.521 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:02.521 Run-time dependency openssl found: YES 3.1.1
00:06:02.521 Run-time dependency libpcap found: YES 1.10.4
00:06:02.521 Has header "pcap.h" with dependency libpcap: YES
00:06:02.521 Compiler for C supports arguments -Wcast-qual: YES
00:06:02.521 Compiler for C supports arguments -Wdeprecated: YES
00:06:02.521 Compiler for C supports arguments -Wformat: YES
00:06:02.521 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:02.521 Compiler for C supports arguments -Wformat-security: NO
00:06:02.521 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:02.521 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:02.521 Compiler for C supports arguments -Wnested-externs: YES
00:06:02.521 Compiler for C supports arguments -Wold-style-definition: YES
00:06:02.521 Compiler for C supports arguments -Wpointer-arith: YES
00:06:02.521 Compiler for C supports arguments -Wsign-compare: YES
00:06:02.521 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:02.521 Compiler for C supports arguments -Wundef: YES
00:06:02.521 Compiler for C supports arguments -Wwrite-strings: YES
00:06:02.521 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:02.521 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:02.521 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:02.521 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:02.521 Program objdump found: YES (/usr/bin/objdump)
00:06:02.521 Compiler for C supports arguments -mavx512f: YES
00:06:02.521 Checking if "AVX512 checking" compiles: YES
00:06:02.521 Fetching value of define "__SSE4_2__" : 1
00:06:02.521 Fetching value of define "__AES__" : 1
00:06:02.521 Fetching value of define "__AVX__" : 1
00:06:02.521 Fetching value of define "__AVX2__" : 1
00:06:02.521 Fetching value of define "__AVX512BW__" : 1
00:06:02.521 Fetching value of define "__AVX512CD__" : 1
00:06:02.521 Fetching value of define "__AVX512DQ__" : 1
00:06:02.521 Fetching value of define "__AVX512F__" : 1
00:06:02.521 Fetching value of define "__AVX512VL__" : 1
00:06:02.521 Fetching value of define "__PCLMUL__" : 1
00:06:02.521 Fetching value of define "__RDRND__" : 1
00:06:02.521 Fetching value of define "__RDSEED__" : 1
00:06:02.521 Fetching value of define "__VPCLMULQDQ__" : 1
00:06:02.521 Fetching value of define "__znver1__" : (undefined)
00:06:02.521 Fetching value of define "__znver2__" : (undefined)
00:06:02.521 Fetching value of define "__znver3__" : (undefined)
00:06:02.521 Fetching value of define "__znver4__" : (undefined)
00:06:02.521 Compiler for C supports arguments -Wno-format-truncation: YES
00:06:02.521 Message: lib/log: Defining dependency "log"
00:06:02.521 Message: lib/kvargs: Defining dependency "kvargs"
00:06:02.521 Message: lib/telemetry: Defining dependency "telemetry"
00:06:02.521 Checking for function "getentropy" : NO
00:06:02.521 Message: lib/eal: Defining dependency "eal"
00:06:02.521 Message: lib/ring: Defining dependency "ring"
00:06:02.521 Message: lib/rcu: Defining dependency "rcu"
00:06:02.521 Message: lib/mempool: Defining dependency "mempool"
00:06:02.521 Message: lib/mbuf: Defining dependency "mbuf"
00:06:02.521 Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:02.521 Fetching value of define "__AVX512F__" : 1 (cached)
00:06:02.521 Fetching value of define "__AVX512BW__" : 1 (cached)
00:06:02.521 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:06:02.521 Fetching value of define "__AVX512VL__" : 1 (cached)
00:06:02.521 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:06:02.521 Compiler for C supports arguments -mpclmul: YES
00:06:02.521 Compiler for C supports arguments -maes: YES
00:06:02.521 Compiler for C supports arguments -mavx512f: YES (cached)
00:06:02.521 Compiler for C supports arguments -mavx512bw: YES
00:06:02.521 Compiler for C supports arguments -mavx512dq: YES
00:06:02.521 Compiler for C supports arguments -mavx512vl: YES
00:06:02.521 Compiler for C supports arguments -mvpclmulqdq: YES
00:06:02.521 Compiler for C supports arguments -mavx2: YES
00:06:02.521 Compiler for C supports arguments -mavx: YES
00:06:02.521 Message: lib/net: Defining dependency "net"
00:06:02.521 Message: lib/meter: Defining dependency "meter"
00:06:02.521 Message: lib/ethdev: Defining dependency "ethdev"
00:06:02.521 Message: lib/pci: Defining dependency "pci"
00:06:02.521 Message: lib/cmdline: Defining dependency "cmdline"
00:06:02.521 Message: lib/hash: Defining dependency "hash"
00:06:02.521 Message: lib/timer: Defining dependency "timer"
00:06:02.521 Message: lib/compressdev: Defining dependency "compressdev"
00:06:02.521 Message: lib/cryptodev: Defining dependency "cryptodev"
00:06:02.521 Message: lib/dmadev: Defining dependency "dmadev"
00:06:02.521 Compiler for C supports arguments -Wno-cast-qual: YES
00:06:02.521 Message: lib/power: Defining dependency "power"
00:06:02.521 Message: lib/reorder: Defining dependency "reorder"
00:06:02.521 Message: lib/security: Defining dependency "security"
00:06:02.521 Has header "linux/userfaultfd.h" : YES
00:06:02.521 Has header "linux/vduse.h" : YES
00:06:02.521 Message: lib/vhost: Defining dependency "vhost"
00:06:02.521 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:06:02.521 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:06:02.521 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:06:02.521 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:06:02.521 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:06:02.521 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:06:02.521 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:06:02.521 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:06:02.521 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:06:02.521 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:06:02.521 Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:02.521 Configuring doxy-api-html.conf using configuration
00:06:02.521 Configuring doxy-api-man.conf using configuration
00:06:02.521 Program mandb found: YES (/usr/bin/mandb)
00:06:02.521 Program sphinx-build found: NO
00:06:02.521 Configuring rte_build_config.h using configuration
00:06:02.521 Message: 
00:06:02.521 =================
00:06:02.521 Applications Enabled
00:06:02.521 =================
00:06:02.521 
00:06:02.521 apps:
00:06:02.521 
00:06:02.521 
00:06:02.521 Message: 
00:06:02.521 =================
00:06:02.521 Libraries Enabled
00:06:02.521 =================
00:06:02.521 
00:06:02.521 libs:
00:06:02.521 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:06:02.521 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:06:02.521 cryptodev, dmadev, power, reorder, security, vhost, 
00:06:02.521 
00:06:02.521 Message: 
00:06:02.521 ===============
00:06:02.521 Drivers Enabled
00:06:02.521 ===============
00:06:02.521 
00:06:02.521 common:
00:06:02.521 
00:06:02.521 bus:
00:06:02.521 pci, vdev, 
00:06:02.521 mempool:
00:06:02.521 ring, 
00:06:02.521 dma:
00:06:02.521 
00:06:02.521 net:
00:06:02.521 
00:06:02.521 crypto:
00:06:02.521 
00:06:02.521 compress:
00:06:02.521 
00:06:02.521 vdpa:
00:06:02.521 
00:06:02.521 
00:06:02.521 Message: 
00:06:02.521 =================
00:06:02.521 Content Skipped
00:06:02.521 =================
00:06:02.521 
00:06:02.521 apps:
00:06:02.521 dumpcap: explicitly disabled via build config
00:06:02.521 graph: explicitly disabled via build config
00:06:02.521 pdump: explicitly disabled via build config
00:06:02.521 proc-info: explicitly disabled via build config
00:06:02.521 test-acl: explicitly disabled via build config
00:06:02.521 test-bbdev: explicitly disabled via build config
00:06:02.521 test-cmdline: explicitly disabled via build config
00:06:02.521 test-compress-perf: explicitly disabled via build config
00:06:02.521 test-crypto-perf: explicitly disabled via build config
00:06:02.521 test-dma-perf: explicitly disabled via build config
00:06:02.521 test-eventdev: explicitly disabled via build config
00:06:02.521 test-fib: explicitly disabled via build config
00:06:02.521 test-flow-perf: explicitly disabled via build config
00:06:02.522 test-gpudev: explicitly disabled via build config
00:06:02.522 test-mldev: explicitly disabled via build config
00:06:02.522 test-pipeline: explicitly disabled via build config
00:06:02.522 test-pmd: explicitly disabled via build config
00:06:02.522 test-regex: explicitly disabled via build config
00:06:02.522 test-sad: explicitly disabled via build config
00:06:02.522 test-security-perf: explicitly disabled via build config
00:06:02.522 
00:06:02.522 libs:
00:06:02.522 argparse: explicitly disabled via build config
00:06:02.522 metrics: explicitly disabled via build config
00:06:02.522 acl: explicitly disabled via build config
00:06:02.522 bbdev: explicitly disabled via build config
00:06:02.522 bitratestats: explicitly disabled via build config
00:06:02.522 bpf: explicitly disabled via build config
00:06:02.522 cfgfile: explicitly disabled via build config
00:06:02.522 distributor: explicitly disabled via build config
00:06:02.522 efd: explicitly disabled via build config
00:06:02.522 eventdev: explicitly disabled via build config
00:06:02.522 dispatcher: explicitly disabled via build config
00:06:02.522 gpudev: explicitly disabled via build config
00:06:02.522 gro: explicitly disabled via build config
00:06:02.522 gso: explicitly disabled via build config
00:06:02.522 ip_frag: explicitly disabled via build config
00:06:02.522 jobstats: explicitly disabled via build config
00:06:02.522 latencystats: explicitly disabled via build config
00:06:02.522 lpm: explicitly disabled via build config
00:06:02.522 member: explicitly disabled via build config
00:06:02.522 pcapng: explicitly disabled via build config
00:06:02.522 rawdev: explicitly disabled via build config
00:06:02.522 regexdev: explicitly disabled via build config
00:06:02.522 mldev: explicitly disabled via build config
00:06:02.522 rib: explicitly disabled via build config
00:06:02.522 sched: explicitly disabled via build config
00:06:02.522 stack: explicitly disabled via build config
00:06:02.522 ipsec: explicitly disabled via build config
00:06:02.522 pdcp: explicitly disabled via build config
00:06:02.522 fib: explicitly disabled via build config
00:06:02.522 port: explicitly disabled via build config
00:06:02.522 pdump: explicitly disabled via build config
00:06:02.522 table: explicitly disabled via build config
00:06:02.522 pipeline: explicitly disabled via build config
00:06:02.522 graph: explicitly disabled via build config
00:06:02.522 node: explicitly disabled via build config
00:06:02.522 
00:06:02.522 drivers:
00:06:02.522 common/cpt: not in enabled drivers build config
00:06:02.522 common/dpaax: not in enabled drivers build config
00:06:02.522 common/iavf: not in enabled drivers build config
00:06:02.522 common/idpf: not in enabled drivers build config
00:06:02.522 common/ionic: not in enabled drivers build config
00:06:02.522 common/mvep: not in enabled drivers build config
00:06:02.522 common/octeontx: not in enabled drivers build config
00:06:02.522 bus/auxiliary: not in enabled drivers build config
00:06:02.522 bus/cdx: not in enabled drivers build config
00:06:02.522 bus/dpaa: not in enabled drivers build config
00:06:02.522 bus/fslmc: not in enabled drivers build config
00:06:02.522 bus/ifpga: not in enabled drivers build config
00:06:02.522 bus/platform: not in enabled drivers build config
00:06:02.522 bus/uacce: not in enabled drivers build config
00:06:02.522 bus/vmbus: not in enabled drivers build config
00:06:02.522 common/cnxk: not in enabled drivers build config
00:06:02.522 common/mlx5: not in enabled drivers build config
00:06:02.522 common/nfp: not in enabled drivers build config
00:06:02.522 common/nitrox: not in enabled drivers build config
00:06:02.522 common/qat: not in enabled drivers build config
00:06:02.522 common/sfc_efx: not in enabled drivers build config
00:06:02.522 mempool/bucket: not in enabled drivers build config
00:06:02.522 mempool/cnxk: not in enabled drivers build config
00:06:02.522 mempool/dpaa: not in enabled drivers build config
00:06:02.522 mempool/dpaa2: not in enabled drivers build config
00:06:02.522 mempool/octeontx: not in enabled drivers build config
00:06:02.522 mempool/stack: not in enabled drivers build config
00:06:02.522 dma/cnxk: not in enabled drivers build config
00:06:02.522 dma/dpaa: not in enabled drivers build config
00:06:02.522 dma/dpaa2: not in enabled drivers build config
00:06:02.522 dma/hisilicon: not in enabled drivers build config
00:06:02.522 dma/idxd: not in enabled drivers build config
00:06:02.522 dma/ioat: not in enabled drivers build config
00:06:02.522 dma/skeleton: not in enabled drivers build config
00:06:02.522 net/af_packet: not in enabled drivers build config
00:06:02.522 net/af_xdp: not in enabled drivers build config
00:06:02.522 net/ark: not in enabled drivers build config
00:06:02.522 net/atlantic: not in enabled drivers build config
00:06:02.522 net/avp: not in enabled drivers build config
00:06:02.522 net/axgbe: not in enabled drivers build config
00:06:02.522 net/bnx2x: not in enabled drivers build config
00:06:02.522 net/bnxt: not in enabled drivers build config
00:06:02.522 net/bonding: not in enabled drivers build config
00:06:02.522 net/cnxk: not in enabled drivers build config
00:06:02.522 net/cpfl: not in enabled drivers build config
00:06:02.522 net/cxgbe: not in enabled drivers build config
00:06:02.522 net/dpaa: not in enabled drivers build config
00:06:02.522 net/dpaa2: not in enabled drivers build config
00:06:02.522 net/e1000: not in enabled drivers build config
00:06:02.522 net/ena: not in enabled drivers build config
00:06:02.522 net/enetc: not in enabled drivers build config
00:06:02.522 net/enetfec: not in enabled drivers build config
00:06:02.522 net/enic: not in enabled drivers build config
00:06:02.522 net/failsafe: not in enabled drivers build config
00:06:02.522 net/fm10k: not in enabled drivers build config
00:06:02.522 net/gve: not in enabled drivers build config
00:06:02.522 net/hinic: not in enabled drivers build config
00:06:02.522 net/hns3: not in enabled drivers build config
00:06:02.522 net/i40e: not in enabled drivers build config
00:06:02.522 net/iavf: not in enabled drivers build config
00:06:02.522 net/ice: not in enabled drivers build config
00:06:02.522 net/idpf: not in enabled drivers build config
00:06:02.522 net/igc: not in enabled drivers build config
00:06:02.522 net/ionic: not in enabled drivers build config
00:06:02.522 net/ipn3ke: not in enabled drivers build config
00:06:02.522 net/ixgbe: not in enabled drivers build config
00:06:02.522 net/mana: not in enabled drivers build config
00:06:02.522 net/memif: not in enabled drivers build config
00:06:02.522 net/mlx4: not in enabled drivers build config
00:06:02.522 net/mlx5: not in enabled drivers build config
00:06:02.522 net/mvneta: not in enabled drivers build config
00:06:02.522 net/mvpp2: not in enabled drivers build config
00:06:02.522 net/netvsc: not in enabled drivers build config
00:06:02.522 net/nfb: not in enabled drivers build config
00:06:02.522 net/nfp: not in enabled drivers build config
00:06:02.522 net/ngbe: not in enabled drivers build config
00:06:02.522 net/null: not in enabled drivers build config
00:06:02.522 net/octeontx: not in enabled drivers build config
00:06:02.522 net/octeon_ep: not in enabled drivers build config
00:06:02.522 net/pcap: not in enabled drivers build config
00:06:02.522 net/pfe: not in enabled drivers build config
00:06:02.522 net/qede: not in enabled drivers build config
00:06:02.522 net/ring: not in enabled drivers build config
00:06:02.522 net/sfc: not in enabled drivers build config
00:06:02.522 net/softnic: not in enabled drivers build config
00:06:02.522 net/tap: not in enabled drivers build config
00:06:02.522 net/thunderx: not in enabled drivers build config
00:06:02.522 net/txgbe: not in enabled drivers build config
00:06:02.522 net/vdev_netvsc: not in enabled drivers build config
00:06:02.522 net/vhost: not in enabled drivers build config
00:06:02.522 net/virtio: not in enabled drivers build config
00:06:02.522 net/vmxnet3: not in enabled drivers build config
00:06:02.522 raw/*: missing internal dependency, "rawdev"
00:06:02.522 crypto/armv8: not in enabled drivers build config
00:06:02.522 crypto/bcmfs: not in enabled drivers build config
00:06:02.522 crypto/caam_jr: not in enabled drivers build config
00:06:02.523 crypto/ccp: not in enabled drivers build config
00:06:02.523 crypto/cnxk: not in enabled drivers build config
00:06:02.523 crypto/dpaa_sec: not in enabled drivers build config
00:06:02.523 crypto/dpaa2_sec: not in enabled drivers build config
00:06:02.523 crypto/ipsec_mb: not in enabled drivers build config
00:06:02.523 crypto/mlx5: not in enabled drivers build config
00:06:02.523 crypto/mvsam: not in enabled drivers build config
00:06:02.523 crypto/nitrox: not in enabled drivers build config
00:06:02.523 crypto/null: not in enabled drivers build config
00:06:02.523 crypto/octeontx: not in enabled drivers build config
00:06:02.523 crypto/openssl: not in enabled drivers build config
00:06:02.523 crypto/scheduler: not in enabled drivers build config
00:06:02.523 crypto/uadk: not in enabled drivers build config
00:06:02.523 crypto/virtio: not in enabled drivers build config
00:06:02.523 compress/isal: not in enabled drivers build config
00:06:02.523 compress/mlx5: not in enabled drivers build config
00:06:02.523 compress/nitrox: not in enabled drivers build config
00:06:02.523 compress/octeontx: not in enabled drivers build config
00:06:02.523 compress/zlib: not in enabled drivers build config
00:06:02.523 regex/*: missing internal dependency, "regexdev"
00:06:02.523 ml/*: missing internal dependency, "mldev"
00:06:02.523 vdpa/ifc: not in enabled drivers build config
00:06:02.523 vdpa/mlx5: not in enabled drivers build config
00:06:02.523 vdpa/nfp: not in enabled drivers build config
00:06:02.523 vdpa/sfc: not in enabled drivers build config
00:06:02.523 event/*: missing internal dependency, "eventdev"
00:06:02.523 baseband/*: missing internal dependency, "bbdev"
00:06:02.523 gpu/*: missing internal dependency, "gpudev"
00:06:02.523 
00:06:02.523 
00:06:02.523 Build targets in project: 84
00:06:02.523 
00:06:02.523 DPDK 24.03.0
00:06:02.523 
00:06:02.523 User defined options
00:06:02.523 buildtype : debug
00:06:02.523 default_library : shared
00:06:02.523 libdir : lib
00:06:02.523 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:06:02.523 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:06:02.523 c_link_args : 
00:06:02.523 cpu_instruction_set: native
00:06:02.523 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:06:02.523 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:06:02.523 enable_docs : false
00:06:02.523 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:06:02.523 enable_kmods : false
00:06:02.523 max_lcores : 128
00:06:02.523 tests : false
00:06:02.523 
00:06:02.523 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:02.523 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:06:02.523 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:06:02.523 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:06:02.523 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:06:02.523 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:06:02.523 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:06:02.523 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:06:02.523 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:06:02.523 [8/267] Linking static target lib/librte_kvargs.a
00:06:02.523 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:06:02.523 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:06:02.523 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:06:02.523 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:06:02.523 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:06:02.523 [14/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:06:02.523 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:06:02.523 [16/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:06:02.523 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:06:02.523 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:06:02.523 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:06:02.523 [20/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:06:02.523 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:06:02.523 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:06:02.523 [23/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:06:02.523 [24/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:06:02.523 [25/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:06:02.523 [26/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:06:02.783 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:06:02.783 [28/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:06:02.783 [29/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:06:02.783 [30/267] Linking static target lib/librte_log.a
00:06:02.783 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:06:02.783 [32/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:06:02.783 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:06:02.783 [34/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:06:02.783 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:06:02.783 [36/267] Linking static target lib/librte_pci.a
00:06:02.783 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:06:02.783 [38/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:06:02.783 [39/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:06:02.783 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:06:02.783 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:06:02.783 [42/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:06:02.783 [43/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:06:02.783 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:06:02.783 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:06:03.041 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:06:03.041 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:06:03.041 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:06:03.041 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:06:03.041 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:06:03.041 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:06:03.041 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:06:03.041 [53/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:06:03.041 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:06:03.041 [55/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:06:03.041 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:06:03.041 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:06:03.041 [58/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:06:03.041 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:06:03.041 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:06:03.041 [61/267] Linking static target lib/librte_telemetry.a
00:06:03.041 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:06:03.041 [63/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:06:03.041 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:06:03.041 [65/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:06:03.041 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:06:03.041 [67/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:06:03.041 [68/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:06:03.041 [69/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:06:03.042 [70/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:06:03.042 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:03.042 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:03.042 [73/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:03.042 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:03.042 [75/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:03.042 [76/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:03.042 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:03.042 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:03.042 [79/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:03.042 [80/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:03.042 [81/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:03.042 [82/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:03.042 [83/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.042 [84/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:03.042 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:03.042 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:03.042 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:03.042 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:03.042 [89/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:03.042 [90/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:03.042 [91/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:03.042 [92/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:03.042 [93/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:03.042 [94/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:03.042 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:03.042 [96/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:03.042 [97/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:03.042 [98/267] Linking static target lib/librte_timer.a 00:06:03.042 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:03.042 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:03.042 [101/267] Linking static target lib/librte_cmdline.a 00:06:03.042 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:03.042 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:03.042 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:03.042 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:03.042 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:03.042 [107/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:03.042 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:03.042 [109/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:03.042 [110/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:03.042 [111/267] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:03.042 [112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:03.042 [113/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:03.042 [114/267] Linking static target lib/librte_meter.a 00:06:03.042 [115/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:03.042 [116/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:03.042 [117/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:03.042 [118/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:03.042 [119/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:03.042 [120/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:03.042 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:03.042 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:03.042 [123/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:03.042 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:03.302 [125/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:03.302 [126/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:03.302 [127/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:03.302 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:03.302 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:03.302 [130/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:03.302 [131/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:03.302 [132/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:03.302 [133/267] Linking static target lib/librte_dmadev.a 00:06:03.302 [134/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:03.302 [135/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:03.302 [136/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:03.302 [137/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:03.302 [138/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:03.302 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:03.302 [140/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:03.302 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:03.302 [142/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:03.302 [143/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:03.302 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:03.302 [145/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:03.302 [146/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:03.302 [147/267] Linking static target lib/librte_mbuf.a 00:06:03.302 [148/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:03.302 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:03.302 [150/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:03.302 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:03.302 [152/267] Linking static target 
lib/librte_ring.a 00:06:03.302 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:03.302 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:03.302 [155/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:03.302 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:03.302 [157/267] Linking static target lib/librte_net.a 00:06:03.302 [158/267] Linking static target lib/librte_compressdev.a 00:06:03.302 [159/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:03.302 [160/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:03.302 [161/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:03.302 [162/267] Linking static target lib/librte_power.a 00:06:03.302 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:03.302 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:03.302 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:03.302 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:03.302 [167/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:03.302 [168/267] Linking static target lib/librte_eal.a 00:06:03.302 [169/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:03.302 [170/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:03.302 [171/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:03.302 [172/267] Linking static target lib/librte_mempool.a 00:06:03.302 [173/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.302 [174/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:03.302 [175/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:03.302 [176/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:03.302 [177/267] Linking static target lib/librte_hash.a 00:06:03.302 [178/267] Linking static target lib/librte_security.a 00:06:03.302 [179/267] Linking static target drivers/librte_bus_vdev.a 00:06:03.302 [180/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:03.302 [181/267] Linking static target lib/librte_rcu.a 00:06:03.302 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:03.302 [183/267] Linking target lib/librte_log.so.24.1 00:06:03.302 [184/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:03.302 [185/267] Linking static target lib/librte_reorder.a 00:06:03.563 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:03.563 [187/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:03.563 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:03.563 [189/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:03.563 [190/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.563 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:03.563 [192/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:03.563 [193/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:03.563 [194/267] 
Linking static target drivers/librte_bus_pci.a 00:06:03.563 [195/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:03.563 [196/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:03.563 [197/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.563 [198/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.563 [199/267] Linking target lib/librte_kvargs.so.24.1 00:06:03.563 [200/267] Linking target lib/librte_telemetry.so.24.1 00:06:03.563 [201/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:03.563 [202/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:03.563 [203/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:03.563 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.563 [205/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.563 [206/267] Linking static target lib/librte_cryptodev.a 00:06:03.824 [207/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:03.824 [208/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:03.824 [209/267] Linking static target drivers/librte_mempool_ring.a 00:06:03.824 [210/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:03.824 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.824 [212/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:03.824 [213/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.824 [214/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:03.824 [215/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.824 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.085 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:04.085 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.085 [219/267] Linking static target lib/librte_ethdev.a 00:06:04.085 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.085 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.345 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.345 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.345 [224/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.345 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.606 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.179 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:05.179 [228/267] Linking static target lib/librte_vhost.a 00:06:05.751 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
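Two generated steps recur through this stretch of the link phase. "Generating symbol file ...so.24.1.symbols" is meson extracting each newly linked shared library's exported-symbol list, so that dependents relink only when the exported surface actually changes; "Generating lib/<name>.sym_chk with a custom command" is DPDK's symbol check, which verifies that what a library exports matches its version.map. A rough shell equivalent of the export side for one library (the real check lives in DPDK's buildtools, and the version.map path is indicative, not verified against this tree):

  nm --dynamic --defined-only build-tmp/lib/librte_log.so.24.1 \
    | awk '$2 == "T" { print $3 }' | sort > exported.txt
  # then diff exported.txt against the names declared in lib/log/version.map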
00:06:07.135 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.727 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.669 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.931 [233/267] Linking target lib/librte_eal.so.24.1 00:06:14.931 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:14.931 [235/267] Linking target lib/librte_meter.so.24.1 00:06:14.931 [236/267] Linking target lib/librte_ring.so.24.1 00:06:14.931 [237/267] Linking target lib/librte_timer.so.24.1 00:06:14.931 [238/267] Linking target lib/librte_pci.so.24.1 00:06:14.931 [239/267] Linking target lib/librte_dmadev.so.24.1 00:06:14.931 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:06:15.193 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:15.193 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:15.193 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:15.193 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:15.193 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:15.193 [246/267] Linking target lib/librte_mempool.so.24.1 00:06:15.193 [247/267] Linking target lib/librte_rcu.so.24.1 00:06:15.193 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:06:15.193 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:15.193 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:15.455 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:06:15.455 [252/267] Linking target lib/librte_mbuf.so.24.1 00:06:15.455 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:15.455 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:06:15.455 [255/267] Linking target lib/librte_compressdev.so.24.1 00:06:15.455 [256/267] Linking target lib/librte_net.so.24.1 00:06:15.455 [257/267] Linking target lib/librte_reorder.so.24.1 00:06:15.716 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:15.716 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:15.716 [260/267] Linking target lib/librte_security.so.24.1 00:06:15.716 [261/267] Linking target lib/librte_hash.so.24.1 00:06:15.716 [262/267] Linking target lib/librte_cmdline.so.24.1 00:06:15.716 [263/267] Linking target lib/librte_ethdev.so.24.1 00:06:15.979 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:15.979 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:15.979 [266/267] Linking target lib/librte_power.so.24.1 00:06:15.979 [267/267] Linking target lib/librte_vhost.so.24.1 00:06:15.979 INFO: autodetecting backend as ninja 00:06:15.979 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:06:19.286 CC lib/log/log.o 00:06:19.286 CC lib/ut/ut.o 00:06:19.286 CC lib/log/log_flags.o 00:06:19.286 CC lib/ut_mock/mock.o 00:06:19.286 CC lib/log/log_deprecated.o 00:06:19.547 LIB libspdk_ut.a 00:06:19.547 LIB libspdk_ut_mock.a 00:06:19.547 LIB libspdk_log.a 00:06:19.547 SO 
libspdk_ut.so.2.0 00:06:19.547 SO libspdk_ut_mock.so.6.0 00:06:19.547 SO libspdk_log.so.7.0 00:06:19.547 SYMLINK libspdk_ut.so 00:06:19.547 SYMLINK libspdk_ut_mock.so 00:06:19.547 SYMLINK libspdk_log.so 00:06:20.117 CC lib/dma/dma.o 00:06:20.117 CXX lib/trace_parser/trace.o 00:06:20.117 CC lib/util/base64.o 00:06:20.117 CC lib/util/bit_array.o 00:06:20.117 CC lib/ioat/ioat.o 00:06:20.117 CC lib/util/cpuset.o 00:06:20.117 CC lib/util/crc16.o 00:06:20.117 CC lib/util/crc32.o 00:06:20.117 CC lib/util/crc32c.o 00:06:20.117 CC lib/util/crc32_ieee.o 00:06:20.117 CC lib/util/crc64.o 00:06:20.117 CC lib/util/dif.o 00:06:20.117 CC lib/util/fd.o 00:06:20.117 CC lib/util/fd_group.o 00:06:20.117 CC lib/util/file.o 00:06:20.117 CC lib/util/hexlify.o 00:06:20.117 CC lib/util/iov.o 00:06:20.117 CC lib/util/math.o 00:06:20.117 CC lib/util/net.o 00:06:20.117 CC lib/util/pipe.o 00:06:20.117 CC lib/util/strerror_tls.o 00:06:20.117 CC lib/util/string.o 00:06:20.117 CC lib/util/uuid.o 00:06:20.117 CC lib/util/xor.o 00:06:20.117 CC lib/util/zipf.o 00:06:20.117 CC lib/util/md5.o 00:06:20.117 CC lib/vfio_user/host/vfio_user_pci.o 00:06:20.117 CC lib/vfio_user/host/vfio_user.o 00:06:20.117 LIB libspdk_dma.a 00:06:20.117 SO libspdk_dma.so.5.0 00:06:20.376 SYMLINK libspdk_dma.so 00:06:20.376 LIB libspdk_ioat.a 00:06:20.376 SO libspdk_ioat.so.7.0 00:06:20.376 SYMLINK libspdk_ioat.so 00:06:20.376 LIB libspdk_util.a 00:06:20.376 LIB libspdk_vfio_user.a 00:06:20.376 SO libspdk_vfio_user.so.5.0 00:06:20.376 SO libspdk_util.so.10.0 00:06:20.637 SYMLINK libspdk_vfio_user.so 00:06:20.637 SYMLINK libspdk_util.so 00:06:20.897 LIB libspdk_trace_parser.a 00:06:20.897 SO libspdk_trace_parser.so.6.0 00:06:20.897 CC lib/rdma_utils/rdma_utils.o 00:06:20.897 CC lib/conf/conf.o 00:06:20.897 CC lib/json/json_parse.o 00:06:20.897 CC lib/json/json_util.o 00:06:20.897 CC lib/rdma_provider/common.o 00:06:20.897 CC lib/json/json_write.o 00:06:20.897 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:20.897 SYMLINK libspdk_trace_parser.so 00:06:20.897 CC lib/vmd/vmd.o 00:06:20.897 CC lib/idxd/idxd.o 00:06:20.897 CC lib/vmd/led.o 00:06:20.897 CC lib/env_dpdk/env.o 00:06:20.897 CC lib/idxd/idxd_user.o 00:06:20.897 CC lib/idxd/idxd_kernel.o 00:06:20.897 CC lib/env_dpdk/memory.o 00:06:20.897 CC lib/env_dpdk/pci.o 00:06:20.897 CC lib/env_dpdk/init.o 00:06:20.897 CC lib/env_dpdk/threads.o 00:06:20.897 CC lib/env_dpdk/pci_ioat.o 00:06:20.897 CC lib/env_dpdk/pci_virtio.o 00:06:20.897 CC lib/env_dpdk/pci_vmd.o 00:06:20.897 CC lib/env_dpdk/pci_idxd.o 00:06:20.897 CC lib/env_dpdk/pci_event.o 00:06:20.897 CC lib/env_dpdk/sigbus_handler.o 00:06:20.897 CC lib/env_dpdk/pci_dpdk.o 00:06:20.897 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:20.897 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:21.158 LIB libspdk_rdma_provider.a 00:06:21.158 LIB libspdk_conf.a 00:06:21.158 SO libspdk_rdma_provider.so.6.0 00:06:21.158 LIB libspdk_rdma_utils.a 00:06:21.158 SO libspdk_conf.so.6.0 00:06:21.158 LIB libspdk_json.a 00:06:21.158 SO libspdk_rdma_utils.so.1.0 00:06:21.158 SYMLINK libspdk_rdma_provider.so 00:06:21.158 SYMLINK libspdk_conf.so 00:06:21.158 SO libspdk_json.so.6.0 00:06:21.419 SYMLINK libspdk_rdma_utils.so 00:06:21.419 SYMLINK libspdk_json.so 00:06:21.419 LIB libspdk_idxd.a 00:06:21.419 SO libspdk_idxd.so.12.1 00:06:21.419 LIB libspdk_vmd.a 00:06:21.681 SO libspdk_vmd.so.6.0 00:06:21.681 SYMLINK libspdk_idxd.so 00:06:21.681 SYMLINK libspdk_vmd.so 00:06:21.681 CC lib/jsonrpc/jsonrpc_server.o 00:06:21.681 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:21.681 CC 
lib/jsonrpc/jsonrpc_client.o 00:06:21.681 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:21.943 LIB libspdk_jsonrpc.a 00:06:21.943 SO libspdk_jsonrpc.so.6.0 00:06:22.212 SYMLINK libspdk_jsonrpc.so 00:06:22.212 LIB libspdk_env_dpdk.a 00:06:22.212 SO libspdk_env_dpdk.so.15.0 00:06:22.475 SYMLINK libspdk_env_dpdk.so 00:06:22.475 CC lib/rpc/rpc.o 00:06:22.735 LIB libspdk_rpc.a 00:06:22.735 SO libspdk_rpc.so.6.0 00:06:22.735 SYMLINK libspdk_rpc.so 00:06:23.378 CC lib/notify/notify.o 00:06:23.378 CC lib/notify/notify_rpc.o 00:06:23.378 CC lib/keyring/keyring.o 00:06:23.378 CC lib/trace/trace.o 00:06:23.378 CC lib/keyring/keyring_rpc.o 00:06:23.378 CC lib/trace/trace_flags.o 00:06:23.378 CC lib/trace/trace_rpc.o 00:06:23.378 LIB libspdk_notify.a 00:06:23.378 SO libspdk_notify.so.6.0 00:06:23.378 LIB libspdk_trace.a 00:06:23.378 LIB libspdk_keyring.a 00:06:23.378 SO libspdk_keyring.so.2.0 00:06:23.378 SO libspdk_trace.so.11.0 00:06:23.378 SYMLINK libspdk_notify.so 00:06:23.639 SYMLINK libspdk_keyring.so 00:06:23.639 SYMLINK libspdk_trace.so 00:06:23.900 CC lib/sock/sock.o 00:06:23.900 CC lib/thread/thread.o 00:06:23.900 CC lib/sock/sock_rpc.o 00:06:23.900 CC lib/thread/iobuf.o 00:06:24.161 LIB libspdk_sock.a 00:06:24.421 SO libspdk_sock.so.10.0 00:06:24.421 SYMLINK libspdk_sock.so 00:06:24.682 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:24.682 CC lib/nvme/nvme_ctrlr.o 00:06:24.682 CC lib/nvme/nvme_fabric.o 00:06:24.682 CC lib/nvme/nvme_ns_cmd.o 00:06:24.682 CC lib/nvme/nvme_ns.o 00:06:24.682 CC lib/nvme/nvme_pcie_common.o 00:06:24.682 CC lib/nvme/nvme_pcie.o 00:06:24.682 CC lib/nvme/nvme_qpair.o 00:06:24.682 CC lib/nvme/nvme.o 00:06:24.682 CC lib/nvme/nvme_quirks.o 00:06:24.682 CC lib/nvme/nvme_transport.o 00:06:24.682 CC lib/nvme/nvme_discovery.o 00:06:24.682 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:24.682 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:24.682 CC lib/nvme/nvme_tcp.o 00:06:24.682 CC lib/nvme/nvme_opal.o 00:06:24.682 CC lib/nvme/nvme_io_msg.o 00:06:24.682 CC lib/nvme/nvme_poll_group.o 00:06:24.682 CC lib/nvme/nvme_zns.o 00:06:24.682 CC lib/nvme/nvme_stubs.o 00:06:24.682 CC lib/nvme/nvme_auth.o 00:06:24.682 CC lib/nvme/nvme_cuse.o 00:06:24.682 CC lib/nvme/nvme_vfio_user.o 00:06:24.682 CC lib/nvme/nvme_rdma.o 00:06:25.250 LIB libspdk_thread.a 00:06:25.250 SO libspdk_thread.so.10.1 00:06:25.250 SYMLINK libspdk_thread.so 00:06:25.512 CC lib/accel/accel.o 00:06:25.512 CC lib/accel/accel_sw.o 00:06:25.512 CC lib/accel/accel_rpc.o 00:06:25.512 CC lib/init/json_config.o 00:06:25.512 CC lib/init/subsystem.o 00:06:25.512 CC lib/blob/blobstore.o 00:06:25.512 CC lib/virtio/virtio.o 00:06:25.512 CC lib/blob/request.o 00:06:25.512 CC lib/init/subsystem_rpc.o 00:06:25.512 CC lib/virtio/virtio_vhost_user.o 00:06:25.512 CC lib/blob/zeroes.o 00:06:25.512 CC lib/init/rpc.o 00:06:25.512 CC lib/virtio/virtio_vfio_user.o 00:06:25.512 CC lib/blob/blob_bs_dev.o 00:06:25.512 CC lib/fsdev/fsdev.o 00:06:25.512 CC lib/virtio/virtio_pci.o 00:06:25.512 CC lib/fsdev/fsdev_io.o 00:06:25.512 CC lib/fsdev/fsdev_rpc.o 00:06:25.512 CC lib/vfu_tgt/tgt_endpoint.o 00:06:25.512 CC lib/vfu_tgt/tgt_rpc.o 00:06:25.773 LIB libspdk_init.a 00:06:26.034 SO libspdk_init.so.6.0 00:06:26.034 LIB libspdk_virtio.a 00:06:26.034 LIB libspdk_vfu_tgt.a 00:06:26.034 SYMLINK libspdk_init.so 00:06:26.034 SO libspdk_vfu_tgt.so.3.0 00:06:26.034 SO libspdk_virtio.so.7.0 00:06:26.034 SYMLINK libspdk_vfu_tgt.so 00:06:26.034 SYMLINK libspdk_virtio.so 00:06:26.296 LIB libspdk_fsdev.a 00:06:26.296 SO libspdk_fsdev.so.1.0 00:06:26.296 CC lib/event/app.o 00:06:26.296 CC 
lib/event/reactor.o 00:06:26.296 CC lib/event/log_rpc.o 00:06:26.296 CC lib/event/app_rpc.o 00:06:26.296 CC lib/event/scheduler_static.o 00:06:26.296 SYMLINK libspdk_fsdev.so 00:06:26.558 LIB libspdk_accel.a 00:06:26.558 LIB libspdk_nvme.a 00:06:26.558 SO libspdk_accel.so.16.0 00:06:26.558 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:26.820 SYMLINK libspdk_accel.so 00:06:26.820 LIB libspdk_event.a 00:06:26.820 SO libspdk_nvme.so.14.0 00:06:26.820 SO libspdk_event.so.14.0 00:06:26.820 SYMLINK libspdk_event.so 00:06:27.082 SYMLINK libspdk_nvme.so 00:06:27.082 CC lib/bdev/bdev.o 00:06:27.082 CC lib/bdev/bdev_rpc.o 00:06:27.082 CC lib/bdev/bdev_zone.o 00:06:27.082 CC lib/bdev/part.o 00:06:27.082 CC lib/bdev/scsi_nvme.o 00:06:27.344 LIB libspdk_fuse_dispatcher.a 00:06:27.344 SO libspdk_fuse_dispatcher.so.1.0 00:06:27.344 SYMLINK libspdk_fuse_dispatcher.so 00:06:28.286 LIB libspdk_blob.a 00:06:28.286 SO libspdk_blob.so.11.0 00:06:28.546 SYMLINK libspdk_blob.so 00:06:28.807 CC lib/blobfs/blobfs.o 00:06:28.807 CC lib/blobfs/tree.o 00:06:28.807 CC lib/lvol/lvol.o 00:06:29.377 LIB libspdk_bdev.a 00:06:29.377 SO libspdk_bdev.so.16.0 00:06:29.639 SYMLINK libspdk_bdev.so 00:06:29.639 LIB libspdk_blobfs.a 00:06:29.639 SO libspdk_blobfs.so.10.0 00:06:29.639 LIB libspdk_lvol.a 00:06:29.639 SYMLINK libspdk_blobfs.so 00:06:29.639 SO libspdk_lvol.so.10.0 00:06:29.639 SYMLINK libspdk_lvol.so 00:06:29.900 CC lib/nvmf/ctrlr.o 00:06:29.900 CC lib/nvmf/ctrlr_discovery.o 00:06:29.900 CC lib/nvmf/ctrlr_bdev.o 00:06:29.900 CC lib/ublk/ublk.o 00:06:29.900 CC lib/nvmf/subsystem.o 00:06:29.900 CC lib/ublk/ublk_rpc.o 00:06:29.900 CC lib/nvmf/nvmf.o 00:06:29.900 CC lib/nvmf/nvmf_rpc.o 00:06:29.900 CC lib/nvmf/transport.o 00:06:29.900 CC lib/nvmf/tcp.o 00:06:29.900 CC lib/nvmf/mdns_server.o 00:06:29.900 CC lib/nvmf/stubs.o 00:06:29.901 CC lib/nvmf/vfio_user.o 00:06:29.901 CC lib/nvmf/rdma.o 00:06:29.901 CC lib/scsi/dev.o 00:06:29.901 CC lib/nvmf/auth.o 00:06:29.901 CC lib/scsi/lun.o 00:06:29.901 CC lib/nbd/nbd.o 00:06:29.901 CC lib/scsi/port.o 00:06:29.901 CC lib/nbd/nbd_rpc.o 00:06:29.901 CC lib/scsi/scsi.o 00:06:29.901 CC lib/scsi/scsi_bdev.o 00:06:29.901 CC lib/scsi/scsi_pr.o 00:06:29.901 CC lib/scsi/scsi_rpc.o 00:06:29.901 CC lib/ftl/ftl_core.o 00:06:29.901 CC lib/ftl/ftl_init.o 00:06:29.901 CC lib/scsi/task.o 00:06:29.901 CC lib/ftl/ftl_layout.o 00:06:29.901 CC lib/ftl/ftl_debug.o 00:06:29.901 CC lib/ftl/ftl_io.o 00:06:29.901 CC lib/ftl/ftl_sb.o 00:06:29.901 CC lib/ftl/ftl_l2p.o 00:06:29.901 CC lib/ftl/ftl_l2p_flat.o 00:06:29.901 CC lib/ftl/ftl_nv_cache.o 00:06:29.901 CC lib/ftl/ftl_band.o 00:06:29.901 CC lib/ftl/ftl_band_ops.o 00:06:29.901 CC lib/ftl/ftl_writer.o 00:06:29.901 CC lib/ftl/ftl_rq.o 00:06:29.901 CC lib/ftl/ftl_reloc.o 00:06:29.901 CC lib/ftl/ftl_l2p_cache.o 00:06:29.901 CC lib/ftl/ftl_p2l.o 00:06:29.901 CC lib/ftl/ftl_p2l_log.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:29.901 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:29.901 CC lib/ftl/utils/ftl_conf.o 00:06:29.901 CC lib/ftl/utils/ftl_mempool.o 
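Just after the DPDK ninja pass, the log switches to SPDK's own quiet make output, where every line is a one-word tag plus a target: CC compiles a single object, LIB archives a static library, SO links the versioned shared object, and SYMLINK points the unversioned name at it. Roughly, per tag (illustrative commands only, not the exact recipes in SPDK's mk/ fragments):

  cc $CFLAGS -c lib/ftl/utils/ftl_md.c -o ftl_md.o   # CC: one object file
  ar crs libspdk_ftl.a ftl_*.o                       # LIB: static archive
  cc -shared -o libspdk_ut.so.2.0 ...objects...      # SO: versioned shared object
  ln -sf libspdk_ut.so.2.0 libspdk_ut.so             # SYMLINK: unversioned alias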
00:06:29.901 CC lib/ftl/utils/ftl_md.o 00:06:29.901 CC lib/ftl/utils/ftl_bitmap.o 00:06:29.901 CC lib/ftl/utils/ftl_property.o 00:06:29.901 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:29.901 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:29.901 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:29.901 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:29.901 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:29.901 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:29.901 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:29.901 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:29.901 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:29.901 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:29.901 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:29.901 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:29.901 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:29.901 CC lib/ftl/base/ftl_base_dev.o 00:06:29.901 CC lib/ftl/base/ftl_base_bdev.o 00:06:29.901 CC lib/ftl/ftl_trace.o 00:06:30.520 LIB libspdk_nbd.a 00:06:30.520 SO libspdk_nbd.so.7.0 00:06:30.520 SYMLINK libspdk_nbd.so 00:06:31.093 LIB libspdk_scsi.a 00:06:31.093 SO libspdk_scsi.so.9.0 00:06:31.093 LIB libspdk_ublk.a 00:06:31.093 SYMLINK libspdk_scsi.so 00:06:31.093 SO libspdk_ublk.so.3.0 00:06:31.093 SYMLINK libspdk_ublk.so 00:06:31.093 LIB libspdk_ftl.a 00:06:31.354 SO libspdk_ftl.so.9.0 00:06:31.354 CC lib/iscsi/conn.o 00:06:31.354 CC lib/iscsi/init_grp.o 00:06:31.354 CC lib/vhost/vhost.o 00:06:31.354 CC lib/iscsi/iscsi.o 00:06:31.354 CC lib/vhost/vhost_rpc.o 00:06:31.354 CC lib/iscsi/param.o 00:06:31.354 CC lib/vhost/vhost_scsi.o 00:06:31.354 CC lib/iscsi/portal_grp.o 00:06:31.354 CC lib/vhost/vhost_blk.o 00:06:31.354 CC lib/iscsi/tgt_node.o 00:06:31.354 CC lib/iscsi/iscsi_subsystem.o 00:06:31.354 CC lib/vhost/rte_vhost_user.o 00:06:31.354 CC lib/iscsi/iscsi_rpc.o 00:06:31.354 CC lib/iscsi/task.o 00:06:31.615 SYMLINK libspdk_ftl.so 00:06:32.186 LIB libspdk_nvmf.a 00:06:32.186 SO libspdk_nvmf.so.19.0 00:06:32.186 SYMLINK libspdk_nvmf.so 00:06:32.448 LIB libspdk_vhost.a 00:06:32.448 SO libspdk_vhost.so.8.0 00:06:32.448 SYMLINK libspdk_vhost.so 00:06:32.709 LIB libspdk_iscsi.a 00:06:32.709 SO libspdk_iscsi.so.8.0 00:06:32.970 SYMLINK libspdk_iscsi.so 00:06:33.541 CC module/env_dpdk/env_dpdk_rpc.o 00:06:33.541 CC module/vfu_device/vfu_virtio.o 00:06:33.541 CC module/vfu_device/vfu_virtio_blk.o 00:06:33.541 CC module/vfu_device/vfu_virtio_scsi.o 00:06:33.541 CC module/vfu_device/vfu_virtio_rpc.o 00:06:33.541 CC module/vfu_device/vfu_virtio_fs.o 00:06:33.541 LIB libspdk_env_dpdk_rpc.a 00:06:33.541 CC module/accel/ioat/accel_ioat_rpc.o 00:06:33.541 CC module/accel/ioat/accel_ioat.o 00:06:33.541 CC module/sock/posix/posix.o 00:06:33.541 CC module/keyring/linux/keyring.o 00:06:33.541 CC module/accel/iaa/accel_iaa.o 00:06:33.541 CC module/keyring/linux/keyring_rpc.o 00:06:33.541 CC module/accel/iaa/accel_iaa_rpc.o 00:06:33.541 CC module/blob/bdev/blob_bdev.o 00:06:33.541 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:33.541 SO libspdk_env_dpdk_rpc.so.6.0 00:06:33.541 CC module/fsdev/aio/fsdev_aio.o 00:06:33.541 CC module/accel/error/accel_error.o 00:06:33.541 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:33.541 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:33.541 CC module/accel/error/accel_error_rpc.o 00:06:33.541 CC module/fsdev/aio/linux_aio_mgr.o 00:06:33.541 CC module/accel/dsa/accel_dsa.o 00:06:33.541 CC module/accel/dsa/accel_dsa_rpc.o 00:06:33.541 CC module/keyring/file/keyring.o 00:06:33.541 CC module/keyring/file/keyring_rpc.o 00:06:33.541 CC module/scheduler/gscheduler/gscheduler.o 00:06:33.802 SYMLINK 
libspdk_env_dpdk_rpc.so 00:06:33.802 LIB libspdk_keyring_linux.a 00:06:33.802 LIB libspdk_scheduler_dpdk_governor.a 00:06:33.802 LIB libspdk_keyring_file.a 00:06:33.802 LIB libspdk_scheduler_gscheduler.a 00:06:33.802 LIB libspdk_accel_ioat.a 00:06:33.802 SO libspdk_keyring_linux.so.1.0 00:06:33.802 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:33.802 SO libspdk_keyring_file.so.2.0 00:06:33.802 LIB libspdk_accel_iaa.a 00:06:33.802 LIB libspdk_accel_error.a 00:06:33.802 LIB libspdk_scheduler_dynamic.a 00:06:33.802 SO libspdk_scheduler_gscheduler.so.4.0 00:06:33.802 SO libspdk_accel_ioat.so.6.0 00:06:33.802 SO libspdk_accel_error.so.2.0 00:06:33.802 SO libspdk_scheduler_dynamic.so.4.0 00:06:33.802 SO libspdk_accel_iaa.so.3.0 00:06:33.802 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:33.802 LIB libspdk_accel_dsa.a 00:06:33.802 SYMLINK libspdk_keyring_linux.so 00:06:33.802 SYMLINK libspdk_keyring_file.so 00:06:34.071 SYMLINK libspdk_scheduler_gscheduler.so 00:06:34.071 LIB libspdk_blob_bdev.a 00:06:34.071 SYMLINK libspdk_accel_error.so 00:06:34.071 SO libspdk_accel_dsa.so.5.0 00:06:34.071 SYMLINK libspdk_accel_ioat.so 00:06:34.071 SYMLINK libspdk_scheduler_dynamic.so 00:06:34.071 SO libspdk_blob_bdev.so.11.0 00:06:34.071 SYMLINK libspdk_accel_iaa.so 00:06:34.071 LIB libspdk_vfu_device.a 00:06:34.071 SYMLINK libspdk_accel_dsa.so 00:06:34.071 SYMLINK libspdk_blob_bdev.so 00:06:34.071 SO libspdk_vfu_device.so.3.0 00:06:34.071 SYMLINK libspdk_vfu_device.so 00:06:34.333 LIB libspdk_fsdev_aio.a 00:06:34.333 SO libspdk_fsdev_aio.so.1.0 00:06:34.333 LIB libspdk_sock_posix.a 00:06:34.333 SO libspdk_sock_posix.so.6.0 00:06:34.333 SYMLINK libspdk_fsdev_aio.so 00:06:34.596 SYMLINK libspdk_sock_posix.so 00:06:34.596 CC module/bdev/delay/vbdev_delay.o 00:06:34.596 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:34.596 CC module/bdev/gpt/gpt.o 00:06:34.596 CC module/bdev/gpt/vbdev_gpt.o 00:06:34.596 CC module/bdev/lvol/vbdev_lvol.o 00:06:34.596 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:34.596 CC module/bdev/malloc/bdev_malloc.o 00:06:34.596 CC module/blobfs/bdev/blobfs_bdev.o 00:06:34.596 CC module/bdev/error/vbdev_error.o 00:06:34.596 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:34.596 CC module/bdev/error/vbdev_error_rpc.o 00:06:34.596 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:34.596 CC module/bdev/iscsi/bdev_iscsi.o 00:06:34.596 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:34.596 CC module/bdev/nvme/bdev_nvme.o 00:06:34.596 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:34.596 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:34.596 CC module/bdev/nvme/nvme_rpc.o 00:06:34.596 CC module/bdev/nvme/bdev_mdns_client.o 00:06:34.596 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:34.596 CC module/bdev/nvme/vbdev_opal.o 00:06:34.596 CC module/bdev/aio/bdev_aio.o 00:06:34.596 CC module/bdev/null/bdev_null.o 00:06:34.596 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:34.596 CC module/bdev/split/vbdev_split.o 00:06:34.596 CC module/bdev/aio/bdev_aio_rpc.o 00:06:34.596 CC module/bdev/null/bdev_null_rpc.o 00:06:34.596 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:34.596 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:34.596 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:34.596 CC module/bdev/passthru/vbdev_passthru.o 00:06:34.596 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:34.596 CC module/bdev/split/vbdev_split_rpc.o 00:06:34.596 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:34.596 CC module/bdev/raid/bdev_raid.o 00:06:34.596 CC module/bdev/raid/bdev_raid_rpc.o 00:06:34.596 CC module/bdev/ftl/bdev_ftl.o 
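The module/bdev/* objects compiled in this run (delay, gpt, malloc, null, passthru, raid, virtio, and so on) are block-device modules that get exercised at runtime rather than at build time: once a target application is up, bdevs are created over its JSON-RPC socket. A hedged usage example with SPDK's rpc.py; option spellings follow recent releases, so verify against scripts/rpc.py --help:

  # a 64 MiB malloc bdev with 512-byte blocks, twice, then a RAID0 across them
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  scripts/rpc.py bdev_raid_create -n Raid0 -z 64 -r 0 -b "Malloc0 Malloc1"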
00:06:34.596 CC module/bdev/raid/bdev_raid_sb.o 00:06:34.596 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:34.596 CC module/bdev/raid/raid0.o 00:06:34.596 CC module/bdev/raid/raid1.o 00:06:34.596 CC module/bdev/raid/concat.o 00:06:34.856 LIB libspdk_blobfs_bdev.a 00:06:34.856 SO libspdk_blobfs_bdev.so.6.0 00:06:34.856 LIB libspdk_bdev_error.a 00:06:35.116 LIB libspdk_bdev_null.a 00:06:35.116 SYMLINK libspdk_blobfs_bdev.so 00:06:35.116 LIB libspdk_bdev_ftl.a 00:06:35.116 LIB libspdk_bdev_gpt.a 00:06:35.116 LIB libspdk_bdev_split.a 00:06:35.116 SO libspdk_bdev_error.so.6.0 00:06:35.116 LIB libspdk_bdev_passthru.a 00:06:35.116 SO libspdk_bdev_null.so.6.0 00:06:35.116 SO libspdk_bdev_gpt.so.6.0 00:06:35.116 SO libspdk_bdev_split.so.6.0 00:06:35.116 SO libspdk_bdev_ftl.so.6.0 00:06:35.116 SO libspdk_bdev_passthru.so.6.0 00:06:35.116 SYMLINK libspdk_bdev_error.so 00:06:35.116 SYMLINK libspdk_bdev_gpt.so 00:06:35.116 SYMLINK libspdk_bdev_null.so 00:06:35.116 LIB libspdk_bdev_delay.a 00:06:35.116 SYMLINK libspdk_bdev_ftl.so 00:06:35.116 SYMLINK libspdk_bdev_split.so 00:06:35.116 LIB libspdk_bdev_aio.a 00:06:35.116 SYMLINK libspdk_bdev_passthru.so 00:06:35.116 LIB libspdk_bdev_zone_block.a 00:06:35.116 SO libspdk_bdev_delay.so.6.0 00:06:35.116 LIB libspdk_bdev_malloc.a 00:06:35.116 LIB libspdk_bdev_iscsi.a 00:06:35.116 SO libspdk_bdev_aio.so.6.0 00:06:35.116 SO libspdk_bdev_zone_block.so.6.0 00:06:35.116 SO libspdk_bdev_malloc.so.6.0 00:06:35.116 SO libspdk_bdev_iscsi.so.6.0 00:06:35.116 SYMLINK libspdk_bdev_delay.so 00:06:35.377 SYMLINK libspdk_bdev_aio.so 00:06:35.377 SYMLINK libspdk_bdev_zone_block.so 00:06:35.377 LIB libspdk_bdev_lvol.a 00:06:35.377 SYMLINK libspdk_bdev_malloc.so 00:06:35.377 SYMLINK libspdk_bdev_iscsi.so 00:06:35.377 LIB libspdk_bdev_virtio.a 00:06:35.377 SO libspdk_bdev_lvol.so.6.0 00:06:35.377 SO libspdk_bdev_virtio.so.6.0 00:06:35.377 SYMLINK libspdk_bdev_lvol.so 00:06:35.377 SYMLINK libspdk_bdev_virtio.so 00:06:35.662 LIB libspdk_bdev_raid.a 00:06:35.662 SO libspdk_bdev_raid.so.6.0 00:06:35.924 SYMLINK libspdk_bdev_raid.so 00:06:36.961 LIB libspdk_bdev_nvme.a 00:06:36.961 SO libspdk_bdev_nvme.so.7.0 00:06:36.961 SYMLINK libspdk_bdev_nvme.so 00:06:37.592 CC module/event/subsystems/sock/sock.o 00:06:37.592 CC module/event/subsystems/vmd/vmd.o 00:06:37.592 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:37.592 CC module/event/subsystems/iobuf/iobuf.o 00:06:37.592 CC module/event/subsystems/scheduler/scheduler.o 00:06:37.592 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:37.592 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:37.592 CC module/event/subsystems/fsdev/fsdev.o 00:06:37.592 CC module/event/subsystems/keyring/keyring.o 00:06:37.592 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:37.854 LIB libspdk_event_vhost_blk.a 00:06:37.854 LIB libspdk_event_fsdev.a 00:06:37.854 LIB libspdk_event_vfu_tgt.a 00:06:37.854 LIB libspdk_event_keyring.a 00:06:37.854 LIB libspdk_event_sock.a 00:06:37.854 LIB libspdk_event_scheduler.a 00:06:37.854 LIB libspdk_event_vmd.a 00:06:37.854 LIB libspdk_event_iobuf.a 00:06:37.854 SO libspdk_event_vhost_blk.so.3.0 00:06:37.854 SO libspdk_event_vfu_tgt.so.3.0 00:06:37.854 SO libspdk_event_fsdev.so.1.0 00:06:37.854 SO libspdk_event_scheduler.so.4.0 00:06:37.854 SO libspdk_event_sock.so.5.0 00:06:37.854 SO libspdk_event_keyring.so.1.0 00:06:37.854 SO libspdk_event_vmd.so.6.0 00:06:37.854 SO libspdk_event_iobuf.so.3.0 00:06:37.854 SYMLINK libspdk_event_fsdev.so 00:06:37.854 SYMLINK libspdk_event_vhost_blk.so 00:06:37.854 SYMLINK 
libspdk_event_vfu_tgt.so 00:06:37.854 SYMLINK libspdk_event_scheduler.so 00:06:37.854 SYMLINK libspdk_event_keyring.so 00:06:37.854 SYMLINK libspdk_event_sock.so 00:06:37.854 SYMLINK libspdk_event_vmd.so 00:06:37.854 SYMLINK libspdk_event_iobuf.so 00:06:38.426 CC module/event/subsystems/accel/accel.o 00:06:38.426 LIB libspdk_event_accel.a 00:06:38.426 SO libspdk_event_accel.so.6.0 00:06:38.426 SYMLINK libspdk_event_accel.so 00:06:38.999 CC module/event/subsystems/bdev/bdev.o 00:06:38.999 LIB libspdk_event_bdev.a 00:06:38.999 SO libspdk_event_bdev.so.6.0 00:06:39.261 SYMLINK libspdk_event_bdev.so 00:06:39.522 CC module/event/subsystems/nbd/nbd.o 00:06:39.522 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:39.522 CC module/event/subsystems/scsi/scsi.o 00:06:39.522 CC module/event/subsystems/ublk/ublk.o 00:06:39.522 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:39.783 LIB libspdk_event_nbd.a 00:06:39.783 LIB libspdk_event_scsi.a 00:06:39.783 LIB libspdk_event_ublk.a 00:06:39.783 SO libspdk_event_nbd.so.6.0 00:06:39.783 SO libspdk_event_ublk.so.3.0 00:06:39.783 SO libspdk_event_scsi.so.6.0 00:06:39.783 LIB libspdk_event_nvmf.a 00:06:39.783 SYMLINK libspdk_event_ublk.so 00:06:39.783 SYMLINK libspdk_event_nbd.so 00:06:39.783 SYMLINK libspdk_event_scsi.so 00:06:39.783 SO libspdk_event_nvmf.so.6.0 00:06:40.044 SYMLINK libspdk_event_nvmf.so 00:06:40.306 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:40.306 CC module/event/subsystems/iscsi/iscsi.o 00:06:40.306 LIB libspdk_event_vhost_scsi.a 00:06:40.306 LIB libspdk_event_iscsi.a 00:06:40.306 SO libspdk_event_vhost_scsi.so.3.0 00:06:40.306 SO libspdk_event_iscsi.so.6.0 00:06:40.567 SYMLINK libspdk_event_vhost_scsi.so 00:06:40.567 SYMLINK libspdk_event_iscsi.so 00:06:40.567 SO libspdk.so.6.0 00:06:40.567 SYMLINK libspdk.so 00:06:41.140 TEST_HEADER include/spdk/accel.h 00:06:41.140 TEST_HEADER include/spdk/accel_module.h 00:06:41.140 TEST_HEADER include/spdk/assert.h 00:06:41.140 TEST_HEADER include/spdk/barrier.h 00:06:41.140 TEST_HEADER include/spdk/base64.h 00:06:41.140 TEST_HEADER include/spdk/bdev.h 00:06:41.140 TEST_HEADER include/spdk/bdev_module.h 00:06:41.140 TEST_HEADER include/spdk/bdev_zone.h 00:06:41.140 TEST_HEADER include/spdk/bit_array.h 00:06:41.140 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:41.140 TEST_HEADER include/spdk/blob_bdev.h 00:06:41.140 TEST_HEADER include/spdk/bit_pool.h 00:06:41.140 CC app/spdk_lspci/spdk_lspci.o 00:06:41.140 CC app/spdk_top/spdk_top.o 00:06:41.140 TEST_HEADER include/spdk/blobfs.h 00:06:41.140 TEST_HEADER include/spdk/blob.h 00:06:41.140 CC test/rpc_client/rpc_client_test.o 00:06:41.140 TEST_HEADER include/spdk/conf.h 00:06:41.140 CC app/spdk_nvme_perf/perf.o 00:06:41.140 CC app/spdk_nvme_discover/discovery_aer.o 00:06:41.140 TEST_HEADER include/spdk/config.h 00:06:41.140 CC app/trace_record/trace_record.o 00:06:41.140 CC app/spdk_nvme_identify/identify.o 00:06:41.140 TEST_HEADER include/spdk/crc16.h 00:06:41.140 TEST_HEADER include/spdk/cpuset.h 00:06:41.140 TEST_HEADER include/spdk/crc32.h 00:06:41.140 TEST_HEADER include/spdk/crc64.h 00:06:41.140 CXX app/trace/trace.o 00:06:41.140 TEST_HEADER include/spdk/dif.h 00:06:41.140 TEST_HEADER include/spdk/dma.h 00:06:41.140 TEST_HEADER include/spdk/endian.h 00:06:41.140 TEST_HEADER include/spdk/env_dpdk.h 00:06:41.140 TEST_HEADER include/spdk/env.h 00:06:41.140 TEST_HEADER include/spdk/event.h 00:06:41.140 TEST_HEADER include/spdk/fd_group.h 00:06:41.140 TEST_HEADER include/spdk/fd.h 00:06:41.140 TEST_HEADER include/spdk/file.h 00:06:41.140 
TEST_HEADER include/spdk/fsdev.h 00:06:41.140 TEST_HEADER include/spdk/fsdev_module.h 00:06:41.140 TEST_HEADER include/spdk/ftl.h 00:06:41.140 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:41.140 TEST_HEADER include/spdk/gpt_spec.h 00:06:41.140 TEST_HEADER include/spdk/hexlify.h 00:06:41.140 TEST_HEADER include/spdk/histogram_data.h 00:06:41.140 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:41.140 TEST_HEADER include/spdk/idxd.h 00:06:41.140 TEST_HEADER include/spdk/idxd_spec.h 00:06:41.140 TEST_HEADER include/spdk/init.h 00:06:41.140 TEST_HEADER include/spdk/ioat_spec.h 00:06:41.140 TEST_HEADER include/spdk/ioat.h 00:06:41.140 TEST_HEADER include/spdk/iscsi_spec.h 00:06:41.140 TEST_HEADER include/spdk/json.h 00:06:41.140 TEST_HEADER include/spdk/keyring.h 00:06:41.140 TEST_HEADER include/spdk/jsonrpc.h 00:06:41.140 TEST_HEADER include/spdk/keyring_module.h 00:06:41.140 CC app/iscsi_tgt/iscsi_tgt.o 00:06:41.140 CC app/spdk_dd/spdk_dd.o 00:06:41.140 TEST_HEADER include/spdk/likely.h 00:06:41.140 TEST_HEADER include/spdk/lvol.h 00:06:41.140 TEST_HEADER include/spdk/log.h 00:06:41.140 TEST_HEADER include/spdk/md5.h 00:06:41.140 CC app/nvmf_tgt/nvmf_main.o 00:06:41.140 TEST_HEADER include/spdk/mmio.h 00:06:41.140 TEST_HEADER include/spdk/memory.h 00:06:41.140 TEST_HEADER include/spdk/nbd.h 00:06:41.140 TEST_HEADER include/spdk/net.h 00:06:41.140 TEST_HEADER include/spdk/notify.h 00:06:41.140 TEST_HEADER include/spdk/nvme_intel.h 00:06:41.140 TEST_HEADER include/spdk/nvme.h 00:06:41.140 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:41.140 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:41.140 TEST_HEADER include/spdk/nvme_spec.h 00:06:41.140 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:41.140 TEST_HEADER include/spdk/nvme_zns.h 00:06:41.140 TEST_HEADER include/spdk/nvmf.h 00:06:41.140 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:41.140 CC app/spdk_tgt/spdk_tgt.o 00:06:41.140 TEST_HEADER include/spdk/nvmf_spec.h 00:06:41.140 TEST_HEADER include/spdk/nvmf_transport.h 00:06:41.140 TEST_HEADER include/spdk/opal.h 00:06:41.140 TEST_HEADER include/spdk/pipe.h 00:06:41.140 TEST_HEADER include/spdk/opal_spec.h 00:06:41.140 TEST_HEADER include/spdk/pci_ids.h 00:06:41.141 TEST_HEADER include/spdk/queue.h 00:06:41.141 TEST_HEADER include/spdk/reduce.h 00:06:41.141 TEST_HEADER include/spdk/rpc.h 00:06:41.141 TEST_HEADER include/spdk/scheduler.h 00:06:41.141 TEST_HEADER include/spdk/scsi.h 00:06:41.141 TEST_HEADER include/spdk/scsi_spec.h 00:06:41.141 TEST_HEADER include/spdk/sock.h 00:06:41.141 TEST_HEADER include/spdk/stdinc.h 00:06:41.141 TEST_HEADER include/spdk/thread.h 00:06:41.141 TEST_HEADER include/spdk/string.h 00:06:41.141 TEST_HEADER include/spdk/trace.h 00:06:41.141 TEST_HEADER include/spdk/trace_parser.h 00:06:41.141 TEST_HEADER include/spdk/tree.h 00:06:41.141 TEST_HEADER include/spdk/ublk.h 00:06:41.141 TEST_HEADER include/spdk/version.h 00:06:41.141 TEST_HEADER include/spdk/util.h 00:06:41.141 TEST_HEADER include/spdk/uuid.h 00:06:41.141 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:41.141 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:41.141 TEST_HEADER include/spdk/vmd.h 00:06:41.141 TEST_HEADER include/spdk/xor.h 00:06:41.141 TEST_HEADER include/spdk/vhost.h 00:06:41.141 CXX test/cpp_headers/accel.o 00:06:41.141 TEST_HEADER include/spdk/zipf.h 00:06:41.141 CXX test/cpp_headers/accel_module.o 00:06:41.141 CXX test/cpp_headers/assert.o 00:06:41.141 CXX test/cpp_headers/bdev.o 00:06:41.141 CXX test/cpp_headers/base64.o 00:06:41.141 CXX test/cpp_headers/bdev_zone.o 00:06:41.141 CXX 
test/cpp_headers/bdev_module.o 00:06:41.141 CXX test/cpp_headers/barrier.o 00:06:41.141 CXX test/cpp_headers/bit_pool.o 00:06:41.141 CXX test/cpp_headers/blob_bdev.o 00:06:41.141 CXX test/cpp_headers/blobfs_bdev.o 00:06:41.141 CXX test/cpp_headers/bit_array.o 00:06:41.141 CXX test/cpp_headers/blobfs.o 00:06:41.141 CXX test/cpp_headers/conf.o 00:06:41.141 CXX test/cpp_headers/config.o 00:06:41.141 CXX test/cpp_headers/cpuset.o 00:06:41.141 CXX test/cpp_headers/blob.o 00:06:41.141 CXX test/cpp_headers/crc32.o 00:06:41.141 CXX test/cpp_headers/crc16.o 00:06:41.141 CXX test/cpp_headers/dif.o 00:06:41.141 CXX test/cpp_headers/crc64.o 00:06:41.141 CXX test/cpp_headers/endian.o 00:06:41.141 CXX test/cpp_headers/env_dpdk.o 00:06:41.416 CXX test/cpp_headers/dma.o 00:06:41.416 CXX test/cpp_headers/env.o 00:06:41.416 CXX test/cpp_headers/event.o 00:06:41.416 CXX test/cpp_headers/fd_group.o 00:06:41.416 CXX test/cpp_headers/fd.o 00:06:41.416 CXX test/cpp_headers/fsdev_module.o 00:06:41.416 CXX test/cpp_headers/fsdev.o 00:06:41.416 CXX test/cpp_headers/file.o 00:06:41.416 CXX test/cpp_headers/ftl.o 00:06:41.416 CXX test/cpp_headers/fuse_dispatcher.o 00:06:41.416 CXX test/cpp_headers/hexlify.o 00:06:41.416 CXX test/cpp_headers/histogram_data.o 00:06:41.416 CXX test/cpp_headers/gpt_spec.o 00:06:41.416 CXX test/cpp_headers/idxd.o 00:06:41.416 CXX test/cpp_headers/init.o 00:06:41.416 CXX test/cpp_headers/idxd_spec.o 00:06:41.416 CXX test/cpp_headers/ioat_spec.o 00:06:41.416 CXX test/cpp_headers/iscsi_spec.o 00:06:41.416 CXX test/cpp_headers/ioat.o 00:06:41.416 CXX test/cpp_headers/json.o 00:06:41.416 CXX test/cpp_headers/keyring_module.o 00:06:41.416 CXX test/cpp_headers/jsonrpc.o 00:06:41.416 CXX test/cpp_headers/keyring.o 00:06:41.416 CXX test/cpp_headers/lvol.o 00:06:41.416 CXX test/cpp_headers/likely.o 00:06:41.416 CXX test/cpp_headers/log.o 00:06:41.416 CXX test/cpp_headers/memory.o 00:06:41.416 CXX test/cpp_headers/md5.o 00:06:41.416 CXX test/cpp_headers/nbd.o 00:06:41.416 CXX test/cpp_headers/mmio.o 00:06:41.416 CXX test/cpp_headers/notify.o 00:06:41.416 CXX test/cpp_headers/net.o 00:06:41.416 CXX test/cpp_headers/nvme_ocssd.o 00:06:41.416 CC examples/ioat/verify/verify.o 00:06:41.416 CXX test/cpp_headers/nvme.o 00:06:41.416 CXX test/cpp_headers/nvme_spec.o 00:06:41.416 CXX test/cpp_headers/nvme_intel.o 00:06:41.416 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:41.416 CC test/env/vtophys/vtophys.o 00:06:41.416 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:41.416 CXX test/cpp_headers/nvme_zns.o 00:06:41.416 CXX test/cpp_headers/nvmf_spec.o 00:06:41.416 CXX test/cpp_headers/nvmf_cmd.o 00:06:41.416 CC test/thread/poller_perf/poller_perf.o 00:06:41.416 CXX test/cpp_headers/nvmf.o 00:06:41.416 CXX test/cpp_headers/pipe.o 00:06:41.416 CXX test/cpp_headers/nvmf_transport.o 00:06:41.416 CXX test/cpp_headers/opal_spec.o 00:06:41.416 CXX test/cpp_headers/opal.o 00:06:41.416 CXX test/cpp_headers/pci_ids.o 00:06:41.416 CC test/app/stub/stub.o 00:06:41.416 LINK rpc_client_test 00:06:41.416 CXX test/cpp_headers/reduce.o 00:06:41.416 CXX test/cpp_headers/queue.o 00:06:41.416 CC examples/util/zipf/zipf.o 00:06:41.416 CXX test/cpp_headers/rpc.o 00:06:41.416 CXX test/cpp_headers/scsi.o 00:06:41.416 CXX test/cpp_headers/scheduler.o 00:06:41.416 CXX test/cpp_headers/scsi_spec.o 00:06:41.416 CC test/app/histogram_perf/histogram_perf.o 00:06:41.416 CC test/env/memory/memory_ut.o 00:06:41.416 CC test/app/bdev_svc/bdev_svc.o 00:06:41.416 CXX test/cpp_headers/sock.o 00:06:41.416 CXX test/cpp_headers/thread.o 00:06:41.416 CXX 
test/cpp_headers/stdinc.o 00:06:41.416 CC test/app/jsoncat/jsoncat.o 00:06:41.416 CXX test/cpp_headers/trace_parser.o 00:06:41.416 CXX test/cpp_headers/string.o 00:06:41.416 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:41.416 CXX test/cpp_headers/tree.o 00:06:41.416 CXX test/cpp_headers/trace.o 00:06:41.416 CXX test/cpp_headers/ublk.o 00:06:41.416 CXX test/cpp_headers/util.o 00:06:41.416 CXX test/cpp_headers/uuid.o 00:06:41.416 CXX test/cpp_headers/version.o 00:06:41.416 CXX test/cpp_headers/vfio_user_pci.o 00:06:41.416 CXX test/cpp_headers/vhost.o 00:06:41.416 CC test/dma/test_dma/test_dma.o 00:06:41.416 CC test/env/pci/pci_ut.o 00:06:41.707 CXX test/cpp_headers/vfio_user_spec.o 00:06:41.707 CXX test/cpp_headers/vmd.o 00:06:41.707 CXX test/cpp_headers/xor.o 00:06:41.707 CC app/fio/nvme/fio_plugin.o 00:06:41.707 CXX test/cpp_headers/zipf.o 00:06:41.707 LINK interrupt_tgt 00:06:41.707 CC app/fio/bdev/fio_plugin.o 00:06:41.707 LINK spdk_nvme_discover 00:06:41.707 CC examples/ioat/perf/perf.o 00:06:41.986 LINK spdk_lspci 00:06:42.265 LINK vtophys 00:06:42.265 CC test/env/mem_callbacks/mem_callbacks.o 00:06:42.265 LINK spdk_dd 00:06:42.265 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:42.265 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:42.265 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:42.265 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:42.265 LINK histogram_perf 00:06:42.265 LINK poller_perf 00:06:42.265 LINK spdk_trace_record 00:06:42.265 LINK nvmf_tgt 00:06:42.534 LINK env_dpdk_post_init 00:06:42.534 LINK spdk_tgt 00:06:42.534 LINK bdev_svc 00:06:42.534 LINK zipf 00:06:42.534 LINK iscsi_tgt 00:06:42.795 LINK pci_ut 00:06:42.795 LINK nvme_fuzz 00:06:42.795 LINK vhost_fuzz 00:06:42.795 LINK jsoncat 00:06:43.060 LINK spdk_nvme_perf 00:06:43.060 CC test/event/reactor/reactor.o 00:06:43.060 LINK mem_callbacks 00:06:43.060 CC test/event/event_perf/event_perf.o 00:06:43.060 CC test/event/reactor_perf/reactor_perf.o 00:06:43.060 LINK stub 00:06:43.060 CC test/event/app_repeat/app_repeat.o 00:06:43.060 CC test/event/scheduler/scheduler.o 00:06:43.060 LINK ioat_perf 00:06:43.060 LINK verify 00:06:43.060 CC examples/vmd/lsvmd/lsvmd.o 00:06:43.060 LINK spdk_trace 00:06:43.060 CC examples/vmd/led/led.o 00:06:43.060 CC examples/sock/hello_world/hello_sock.o 00:06:43.060 CC examples/idxd/perf/perf.o 00:06:43.060 LINK reactor_perf 00:06:43.060 LINK reactor 00:06:43.323 LINK event_perf 00:06:43.323 CC examples/thread/thread/thread_ex.o 00:06:43.323 LINK app_repeat 00:06:43.323 LINK lsvmd 00:06:43.323 LINK led 00:06:43.323 LINK scheduler 00:06:43.323 LINK spdk_bdev 00:06:43.323 LINK test_dma 00:06:43.323 LINK hello_sock 00:06:43.323 LINK spdk_nvme 00:06:43.585 LINK memory_ut 00:06:43.585 LINK idxd_perf 00:06:43.585 LINK thread 00:06:43.585 LINK spdk_top 00:06:43.585 LINK spdk_nvme_identify 00:06:43.585 CC app/vhost/vhost.o 00:06:43.848 LINK vhost 00:06:43.848 CC examples/nvme/reconnect/reconnect.o 00:06:43.848 CC examples/nvme/hotplug/hotplug.o 00:06:43.848 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:43.848 CC examples/nvme/abort/abort.o 00:06:43.848 CC examples/nvme/hello_world/hello_world.o 00:06:43.848 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:44.108 CC examples/nvme/arbitration/arbitration.o 00:06:44.108 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:44.108 CC test/nvme/reserve/reserve.o 00:06:44.108 CC test/nvme/boot_partition/boot_partition.o 00:06:44.108 CC test/nvme/fdp/fdp.o 00:06:44.108 CC test/nvme/e2edp/nvme_dp.o 00:06:44.108 CC 
test/nvme/fused_ordering/fused_ordering.o 00:06:44.109 CC test/nvme/connect_stress/connect_stress.o 00:06:44.109 CC test/nvme/sgl/sgl.o 00:06:44.109 CC test/nvme/overhead/overhead.o 00:06:44.109 CC test/nvme/startup/startup.o 00:06:44.109 CC test/nvme/simple_copy/simple_copy.o 00:06:44.109 CC test/nvme/err_injection/err_injection.o 00:06:44.109 CC test/nvme/aer/aer.o 00:06:44.109 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:44.109 CC test/nvme/reset/reset.o 00:06:44.109 CC test/nvme/cuse/cuse.o 00:06:44.109 LINK iscsi_fuzz 00:06:44.109 CC test/nvme/compliance/nvme_compliance.o 00:06:44.109 CC test/blobfs/mkfs/mkfs.o 00:06:44.109 CC test/accel/dif/dif.o 00:06:44.109 CC examples/accel/perf/accel_perf.o 00:06:44.109 CC examples/blob/cli/blobcli.o 00:06:44.109 CC examples/blob/hello_world/hello_blob.o 00:06:44.109 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:44.109 CC test/lvol/esnap/esnap.o 00:06:44.109 LINK pmr_persistence 00:06:44.109 LINK cmb_copy 00:06:44.109 LINK hello_world 00:06:44.370 LINK boot_partition 00:06:44.370 LINK hotplug 00:06:44.370 LINK startup 00:06:44.370 LINK fused_ordering 00:06:44.370 LINK reserve 00:06:44.370 LINK err_injection 00:06:44.370 LINK doorbell_aers 00:06:44.370 LINK connect_stress 00:06:44.370 LINK simple_copy 00:06:44.370 LINK mkfs 00:06:44.370 LINK sgl 00:06:44.370 LINK reconnect 00:06:44.370 LINK reset 00:06:44.370 LINK nvme_dp 00:06:44.370 LINK overhead 00:06:44.370 LINK aer 00:06:44.370 LINK arbitration 00:06:44.370 LINK nvme_compliance 00:06:44.370 LINK fdp 00:06:44.370 LINK abort 00:06:44.370 LINK hello_blob 00:06:44.370 LINK hello_fsdev 00:06:44.370 LINK nvme_manage 00:06:44.631 LINK accel_perf 00:06:44.631 LINK blobcli 00:06:44.631 LINK dif 00:06:45.202 CC examples/bdev/hello_world/hello_bdev.o 00:06:45.202 CC examples/bdev/bdevperf/bdevperf.o 00:06:45.202 LINK cuse 00:06:45.462 CC test/bdev/bdevio/bdevio.o 00:06:45.462 LINK hello_bdev 00:06:45.722 LINK bdevio 00:06:45.982 LINK bdevperf 00:06:46.553 CC examples/nvmf/nvmf/nvmf.o 00:06:46.813 LINK nvmf 00:06:48.723 LINK esnap 00:06:48.983 00:06:48.983 real 0m56.159s 00:06:48.983 user 8m8.454s 00:06:48.983 sys 6m20.800s 00:06:48.983 22:35:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:48.983 22:35:15 make -- common/autotest_common.sh@10 -- $ set +x 00:06:48.983 ************************************ 00:06:48.983 END TEST make 00:06:48.983 ************************************ 00:06:48.983 22:35:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:48.983 22:35:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:48.983 22:35:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:48.983 22:35:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:48.983 22:35:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:48.983 22:35:15 -- pm/common@44 -- $ pid=379014 00:06:48.983 22:35:15 -- pm/common@50 -- $ kill -TERM 379014 00:06:48.983 22:35:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:48.983 22:35:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:48.983 22:35:15 -- pm/common@44 -- $ pid=379015 00:06:48.983 22:35:15 -- pm/common@50 -- $ kill -TERM 379015 00:06:48.983 22:35:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:48.983 22:35:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
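The pm/common xtrace here is autotest's resource-monitor teardown: walk MONITOR_RESOURCES, and for each monitor whose pid file exists under ../output/power, send it SIGTERM. Reconstructed from the trace as a sketch (the real function lives with SPDK's scripts/perf/pm helpers; names follow the trace, and the pid-file read is an assumption):

  signal_monitor_resources() {
      local signal=$1 monitor pid
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          local pidfile=$output_dir/power/$monitor.pid
          [[ -e $pidfile ]] || continue     # skip monitors that never started
          pid=$(<"$pidfile")                # assumed: pid file holds the bare pid
          kill -"$signal" "$pid"
      done
  }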
00:06:48.983 22:35:15 -- pm/common@44 -- $ pid=379017
00:06:48.983 22:35:15 -- pm/common@50 -- $ kill -TERM 379017
00:06:48.983 22:35:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:48.983 22:35:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:06:48.983 22:35:15 -- pm/common@44 -- $ pid=379041
00:06:48.983 22:35:15 -- pm/common@50 -- $ sudo -E kill -TERM 379041
00:06:48.983 22:35:15 -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:48.983 22:35:15 -- common/autotest_common.sh@1681 -- # lcov --version
00:06:48.983 22:35:15 -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:49.244 22:35:16 -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:49.245 22:35:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:49.245 22:35:16 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:49.245 22:35:16 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:49.245 22:35:16 -- scripts/common.sh@336 -- # IFS=.-:
00:06:49.245 22:35:16 -- scripts/common.sh@336 -- # read -ra ver1
00:06:49.245 22:35:16 -- scripts/common.sh@337 -- # IFS=.-:
00:06:49.245 22:35:16 -- scripts/common.sh@337 -- # read -ra ver2
00:06:49.245 22:35:16 -- scripts/common.sh@338 -- # local 'op=<'
00:06:49.245 22:35:16 -- scripts/common.sh@340 -- # ver1_l=2
00:06:49.245 22:35:16 -- scripts/common.sh@341 -- # ver2_l=1
00:06:49.245 22:35:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:49.245 22:35:16 -- scripts/common.sh@344 -- # case "$op" in
00:06:49.245 22:35:16 -- scripts/common.sh@345 -- # : 1
00:06:49.245 22:35:16 -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:49.245 22:35:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:49.245 22:35:16 -- scripts/common.sh@365 -- # decimal 1
00:06:49.245 22:35:16 -- scripts/common.sh@353 -- # local d=1
00:06:49.245 22:35:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:49.245 22:35:16 -- scripts/common.sh@355 -- # echo 1
00:06:49.245 22:35:16 -- scripts/common.sh@365 -- # ver1[v]=1
00:06:49.245 22:35:16 -- scripts/common.sh@366 -- # decimal 2
00:06:49.245 22:35:16 -- scripts/common.sh@353 -- # local d=2
00:06:49.245 22:35:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:49.245 22:35:16 -- scripts/common.sh@355 -- # echo 2
00:06:49.245 22:35:16 -- scripts/common.sh@366 -- # ver2[v]=2
00:06:49.245 22:35:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:49.245 22:35:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:49.245 22:35:16 -- scripts/common.sh@368 -- # return 0
00:06:49.245 22:35:16 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:49.245 22:35:16 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:49.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.245 --rc genhtml_branch_coverage=1
00:06:49.245 --rc genhtml_function_coverage=1
00:06:49.245 --rc genhtml_legend=1
00:06:49.245 --rc geninfo_all_blocks=1
00:06:49.245 --rc geninfo_unexecuted_blocks=1
00:06:49.245
00:06:49.245 '
00:06:49.245 22:35:16 -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:49.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.245 --rc genhtml_branch_coverage=1
00:06:49.245 --rc genhtml_function_coverage=1
00:06:49.245 --rc genhtml_legend=1
00:06:49.245 --rc geninfo_all_blocks=1
00:06:49.245 --rc geninfo_unexecuted_blocks=1
00:06:49.245
00:06:49.245 '
00:06:49.245 22:35:16 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:49.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.245 --rc genhtml_branch_coverage=1
00:06:49.245 --rc genhtml_function_coverage=1
00:06:49.245 --rc genhtml_legend=1
00:06:49.245 --rc geninfo_all_blocks=1
00:06:49.245 --rc geninfo_unexecuted_blocks=1
00:06:49.245
00:06:49.245 '
00:06:49.245 22:35:16 -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:49.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.245 --rc genhtml_branch_coverage=1
00:06:49.245 --rc genhtml_function_coverage=1
00:06:49.245 --rc genhtml_legend=1
00:06:49.245 --rc geninfo_all_blocks=1
00:06:49.245 --rc geninfo_unexecuted_blocks=1
00:06:49.245
00:06:49.245 '
00:06:49.245 22:35:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:49.245 22:35:16 -- nvmf/common.sh@7 -- # uname -s
00:06:49.245 22:35:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:49.245 22:35:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:49.245 22:35:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:49.245 22:35:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:49.245 22:35:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:49.245 22:35:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:49.245 22:35:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:49.245 22:35:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:49.245 22:35:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:49.245 22:35:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:49.245 22:35:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:06:49.245 22:35:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:06:49.245 22:35:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:49.245 22:35:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:49.245 22:35:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:49.245 22:35:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:49.245 22:35:16 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:49.245 22:35:16 -- scripts/common.sh@15 -- # shopt -s extglob
00:06:49.245 22:35:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:49.245 22:35:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:49.245 22:35:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:49.245 22:35:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.245 22:35:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.245 22:35:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.245 22:35:16 -- paths/export.sh@5 -- # export PATH
00:06:49.245 22:35:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.245 22:35:16 -- nvmf/common.sh@51 -- # : 0
00:06:49.245 22:35:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:49.245 22:35:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:49.245 22:35:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:49.245 22:35:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:49.245 22:35:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:49.245 22:35:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:49.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:49.245 22:35:16 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:49.245 22:35:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:49.245 22:35:16 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:49.245 22:35:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:06:49.245 22:35:16 -- spdk/autotest.sh@32 -- # uname -s
00:06:49.245 22:35:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:06:49.245 22:35:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:06:49.245 22:35:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:06:49.245 22:35:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:06:49.245 22:35:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:06:49.245 22:35:16 -- spdk/autotest.sh@44 -- # modprobe nbd
00:06:49.245 22:35:16 -- spdk/autotest.sh@46 -- # type -P udevadm
00:06:49.245 22:35:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:06:49.245 22:35:16 -- spdk/autotest.sh@48 -- # udevadm_pid=444564
00:06:49.245 22:35:16 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:06:49.245 22:35:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:06:49.245 22:35:16 -- pm/common@17 -- # local monitor
00:06:49.246 22:35:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:49.246 22:35:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:49.246 22:35:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:49.246 22:35:16 -- pm/common@21 -- # date +%s
00:06:49.246 22:35:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:49.246 22:35:16 -- pm/common@21 -- # date +%s
00:06:49.246 22:35:16 -- pm/common@25 -- # sleep 1
00:06:49.246 22:35:16 -- pm/common@21 -- # date +%s
00:06:49.246 22:35:16 -- pm/common@21 -- # date +%s
00:06:49.246 22:35:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727728516
00:06:49.246 22:35:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727728516
00:06:49.246 22:35:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727728516
00:06:49.246 22:35:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727728516
00:06:49.246 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727728516_collect-vmstat.pm.log
00:06:49.246 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727728516_collect-cpu-load.pm.log
00:06:49.246 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727728516_collect-cpu-temp.pm.log
00:06:49.246 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727728516_collect-bmc-pm.bmc.pm.log
00:06:50.191 22:35:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:06:50.191 22:35:17 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:06:50.191 22:35:17 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:50.191 22:35:17 -- common/autotest_common.sh@10 -- # set +x
00:06:50.191 22:35:17 -- spdk/autotest.sh@59 -- # create_test_list
00:06:50.191 22:35:17 -- common/autotest_common.sh@748 -- # xtrace_disable
00:06:50.191 22:35:17 -- common/autotest_common.sh@10 -- # set +x
00:06:50.191 22:35:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:06:50.191 22:35:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:50.191 22:35:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:50.191 22:35:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:06:50.191 22:35:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:50.191 22:35:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:06:50.191 22:35:17 -- common/autotest_common.sh@1455 -- # uname
00:06:50.191 22:35:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:06:50.191 22:35:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:06:50.451 22:35:17 -- common/autotest_common.sh@1475 -- # uname
00:06:50.451 22:35:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:06:50.451 22:35:17 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:06:50.451 22:35:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:06:50.451 lcov: LCOV version 1.15
00:06:50.451 22:35:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:07:05.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:07:05.367 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:07:23.496 22:35:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:07:23.496 22:35:47 -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:23.496 22:35:47 -- common/autotest_common.sh@10 -- # set +x
00:07:23.496 22:35:47 -- spdk/autotest.sh@78 -- # rm -f
00:07:23.496 22:35:47 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:07:24.068 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:07:24.068 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:07:24.068 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:07:24.068 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:07:24.068 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:07:24.068 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:07:24.068 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:65:00.0 (144d a80a): Already using the nvme driver
00:07:24.329 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:07:24.329 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:07:24.901 22:35:51 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:07:24.901 22:35:51 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:07:24.901 22:35:51 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:07:24.901 22:35:51 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:07:24.901 22:35:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:07:24.901 22:35:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:07:24.901 22:35:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:07:24.901 22:35:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:07:24.901 22:35:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:07:24.901 22:35:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:07:24.901 22:35:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:07:24.901 22:35:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:07:24.901 22:35:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:07:24.901 22:35:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:07:24.901 22:35:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:07:24.901 No valid GPT data, bailing
00:07:24.901 22:35:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:07:24.901 22:35:51 -- scripts/common.sh@394 -- # pt=
00:07:24.901 22:35:51 -- scripts/common.sh@395 -- # return 1
00:07:24.901 22:35:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:07:24.901 1+0 records in
00:07:24.901 1+0 records out
00:07:24.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00190723 s, 550 MB/s
00:07:24.901 22:35:51 -- spdk/autotest.sh@105 -- # sync
00:07:24.902 22:35:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:07:24.902 22:35:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:07:24.902 22:35:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:07:34.911 22:36:00 -- spdk/autotest.sh@111 -- # uname -s
00:07:34.911 22:36:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:07:34.911 22:36:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:07:34.911 22:36:00 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:07:36.828 Hugepages
00:07:36.828 node hugesize free / total
00:07:37.090 node0 1048576kB 0 / 0
00:07:37.090 node0 2048kB 0 / 0
00:07:37.090 node1 1048576kB 0 / 0
00:07:37.090 node1 2048kB 0 / 0
00:07:37.090
00:07:37.090 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:37.090 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:07:37.090 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:07:37.090 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:07:37.090 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:07:37.090 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:07:37.090 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:07:37.090 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:07:37.090 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:07:37.090 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:07:37.090 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:07:37.090 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:07:37.090 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:07:37.090 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:07:37.090 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:07:37.090 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:07:37.090 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:07:37.090 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:07:37.090 22:36:04 -- spdk/autotest.sh@117 -- # uname -s
00:07:37.090 22:36:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:07:37.090 22:36:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:07:37.090 22:36:04 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:41.302 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:07:41.302 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:07:42.687 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:07:42.948 22:36:09 -- common/autotest_common.sh@1515 -- # sleep 1
00:07:44.332 22:36:10 -- common/autotest_common.sh@1516 -- # bdfs=()
00:07:44.332 22:36:10 -- common/autotest_common.sh@1516 -- # local bdfs
00:07:44.332 22:36:10 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:07:44.332 22:36:10 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:07:44.332 22:36:10 -- common/autotest_common.sh@1496 -- # bdfs=()
00:07:44.332 22:36:10 -- common/autotest_common.sh@1496 -- # local bdfs
00:07:44.332 22:36:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:44.332 22:36:10 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:07:44.332 22:36:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:07:44.332 22:36:11 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:07:44.332 22:36:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:07:44.332 22:36:11 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:07:47.635 Waiting for block devices as requested
00:07:47.635 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:07:47.635 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:07:47.897 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:07:47.897 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:07:47.897 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:07:48.253 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:07:48.253 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:07:48.253 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:07:48.253 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:07:48.596 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:07:48.596 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:07:48.596 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:07:48.857 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:07:48.857 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:07:48.857 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:07:49.118 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:07:49.118 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:07:49.378 22:36:16 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:07:49.378 22:36:16 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0
00:07:49.378 22:36:16 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0
00:07:49.378 22:36:16 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme
00:07:49.378 22:36:16 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:07:49.378 22:36:16 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]]
00:07:49.378 22:36:16 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:07:49.378 22:36:16 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:07:49.378 22:36:16 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:07:49.378 22:36:16 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:07:49.378 22:36:16 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:07:49.378 22:36:16 -- common/autotest_common.sh@1529 -- # grep oacs
00:07:49.378 22:36:16 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:07:49.378 22:36:16 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f'
00:07:49.378 22:36:16 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:07:49.378 22:36:16 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:07:49.378 22:36:16 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:07:49.378 22:36:16 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:07:49.378 22:36:16 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:07:49.378 22:36:16 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:07:49.378 22:36:16 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:07:49.378 22:36:16 -- common/autotest_common.sh@1541 -- # continue
00:07:49.378 22:36:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:07:49.378 22:36:16 -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:49.378 22:36:16 -- common/autotest_common.sh@10 -- # set +x
00:07:49.378 22:36:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:07:49.378 22:36:16 -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:49.378 22:36:16 -- common/autotest_common.sh@10 -- # set +x
00:07:49.378 22:36:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:53.584 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:07:53.584 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:07:53.584 22:36:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:07:53.584 22:36:20 -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:53.584 22:36:20 -- common/autotest_common.sh@10 -- # set +x
00:07:53.584 22:36:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:07:53.584 22:36:20 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:07:53.584 22:36:20 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:07:53.584 22:36:20 -- common/autotest_common.sh@1561 -- # bdfs=()
00:07:53.584 22:36:20 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:07:53.584 22:36:20 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:07:53.584 22:36:20 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:07:53.584 22:36:20 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:07:53.584 22:36:20 -- common/autotest_common.sh@1496 -- # bdfs=()
00:07:53.584 22:36:20 -- common/autotest_common.sh@1496 -- # local bdfs
00:07:53.584 22:36:20 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:53.584 22:36:20 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:07:53.584 22:36:20 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:07:53.845 22:36:20 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:07:53.845 22:36:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:07:53.845 22:36:20 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:07:53.845 22:36:20 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device
00:07:53.845 22:36:20 -- common/autotest_common.sh@1564 -- # device=0xa80a
00:07:53.845 22:36:20 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]]
00:07:53.845 22:36:20 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:07:53.845 22:36:20 -- common/autotest_common.sh@1570 -- # return 0
00:07:53.845 22:36:20 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:07:53.845 22:36:20 -- common/autotest_common.sh@1578 -- # return 0
00:07:53.845 22:36:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:07:53.845 22:36:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:07:53.845 22:36:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:07:53.845 22:36:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:07:53.845 22:36:20 -- spdk/autotest.sh@149 -- # timing_enter lib
00:07:53.845 22:36:20 -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:53.845 22:36:20 -- common/autotest_common.sh@10 -- # set +x
00:07:53.845 22:36:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:07:53.845 22:36:20 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:07:53.845 22:36:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:53.845 22:36:20 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:53.845 22:36:20 -- common/autotest_common.sh@10 -- # set +x
00:07:53.845 ************************************
00:07:53.845 START TEST env
00:07:53.845 ************************************
00:07:53.845 22:36:20 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:07:53.845 * Looking for test storage...
00:07:53.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:07:53.845 22:36:20 env -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:53.845 22:36:20 env -- common/autotest_common.sh@1681 -- # lcov --version
00:07:53.845 22:36:20 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:54.105 22:36:20 env -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:54.105 22:36:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:54.105 22:36:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:54.105 22:36:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:54.105 22:36:20 env -- scripts/common.sh@336 -- # IFS=.-:
00:07:54.105 22:36:20 env -- scripts/common.sh@336 -- # read -ra ver1
00:07:54.105 22:36:20 env -- scripts/common.sh@337 -- # IFS=.-:
00:07:54.105 22:36:20 env -- scripts/common.sh@337 -- # read -ra ver2
00:07:54.105 22:36:20 env -- scripts/common.sh@338 -- # local 'op=<'
00:07:54.105 22:36:20 env -- scripts/common.sh@340 -- # ver1_l=2
00:07:54.105 22:36:20 env -- scripts/common.sh@341 -- # ver2_l=1
00:07:54.105 22:36:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:54.105 22:36:20 env -- scripts/common.sh@344 -- # case "$op" in
00:07:54.105 22:36:20 env -- scripts/common.sh@345 -- # : 1
00:07:54.105 22:36:20 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:54.105 22:36:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:54.105 22:36:20 env -- scripts/common.sh@365 -- # decimal 1
00:07:54.105 22:36:20 env -- scripts/common.sh@353 -- # local d=1
00:07:54.105 22:36:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:54.105 22:36:20 env -- scripts/common.sh@355 -- # echo 1
00:07:54.105 22:36:20 env -- scripts/common.sh@365 -- # ver1[v]=1
00:07:54.105 22:36:20 env -- scripts/common.sh@366 -- # decimal 2
00:07:54.105 22:36:20 env -- scripts/common.sh@353 -- # local d=2
00:07:54.105 22:36:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:54.105 22:36:20 env -- scripts/common.sh@355 -- # echo 2
00:07:54.105 22:36:20 env -- scripts/common.sh@366 -- # ver2[v]=2
00:07:54.105 22:36:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:54.105 22:36:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:54.105 22:36:20 env -- scripts/common.sh@368 -- # return 0
00:07:54.105 22:36:20 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:54.105 22:36:20 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:54.105 --rc genhtml_branch_coverage=1
00:07:54.105 --rc genhtml_function_coverage=1
00:07:54.105 --rc genhtml_legend=1
00:07:54.105 --rc geninfo_all_blocks=1
00:07:54.105 --rc geninfo_unexecuted_blocks=1
00:07:54.105
00:07:54.105 '
00:07:54.105 22:36:20 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:54.105 --rc genhtml_branch_coverage=1
00:07:54.105 --rc genhtml_function_coverage=1
00:07:54.105 --rc genhtml_legend=1
00:07:54.105 --rc geninfo_all_blocks=1
00:07:54.105 --rc geninfo_unexecuted_blocks=1
00:07:54.105
00:07:54.105 '
00:07:54.105 22:36:20 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:54.106 --rc genhtml_branch_coverage=1
00:07:54.106 --rc genhtml_function_coverage=1
00:07:54.106 --rc genhtml_legend=1
00:07:54.106 --rc geninfo_all_blocks=1
00:07:54.106 --rc geninfo_unexecuted_blocks=1
00:07:54.106
00:07:54.106 '
00:07:54.106 22:36:20 env -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:54.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:54.106 --rc genhtml_branch_coverage=1
00:07:54.106 --rc genhtml_function_coverage=1
00:07:54.106 --rc genhtml_legend=1
00:07:54.106 --rc geninfo_all_blocks=1
00:07:54.106 --rc geninfo_unexecuted_blocks=1
00:07:54.106
00:07:54.106 '
00:07:54.106 22:36:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:07:54.106 22:36:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:54.106 22:36:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:54.106 22:36:20 env -- common/autotest_common.sh@10 -- # set +x
00:07:54.106 ************************************
00:07:54.106 START TEST env_memory
00:07:54.106 ************************************
00:07:54.106 22:36:20 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:07:54.106
00:07:54.106
00:07:54.106 CUnit - A unit testing framework for C - Version 2.1-3
00:07:54.106 http://cunit.sourceforge.net/
00:07:54.106
00:07:54.106
00:07:54.106 Suite: memory
00:07:54.106 Test: alloc and free memory map ...[2024-09-30 22:36:21.016931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:07:54.106 passed
00:07:54.106 Test: mem map translation ...[2024-09-30 22:36:21.042473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:07:54.106 [2024-09-30 22:36:21.042514] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:07:54.106 [2024-09-30 22:36:21.042560] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:07:54.106 [2024-09-30 22:36:21.042567] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:07:54.106 passed
00:07:54.106 Test: mem map registration ...[2024-09-30 22:36:21.097724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:07:54.106 [2024-09-30 22:36:21.097743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:07:54.106 passed
00:07:54.367 Test: mem map adjacent registrations ...passed
00:07:54.367
00:07:54.367 Run Summary: Type Total Ran Passed Failed Inactive
00:07:54.367 suites 1 1 n/a 0 0
00:07:54.367 tests 4 4 4 0 0
00:07:54.367 asserts 152 152 152 0 n/a
00:07:54.367
00:07:54.367 Elapsed time = 0.195 seconds
00:07:54.367
00:07:54.367 real 0m0.209s
00:07:54.367 user 0m0.196s
00:07:54.367 sys 0m0.013s
00:07:54.367 22:36:21 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:54.367 22:36:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:07:54.367 ************************************
00:07:54.367 END TEST env_memory
00:07:54.367 ************************************
00:07:54.367 22:36:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:07:54.367 22:36:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:54.367 22:36:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:54.367 22:36:21 env -- common/autotest_common.sh@10 -- # set +x
00:07:54.367 ************************************
00:07:54.367 START TEST env_vtophys
00:07:54.367 ************************************
00:07:54.367 22:36:21 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:07:54.367 EAL: lib.eal log level changed from notice to debug
00:07:54.367 EAL: Detected lcore 0 as core 0 on socket 0
00:07:54.367 EAL: Detected lcore 1 as core 1 on socket 0
00:07:54.367 EAL: Detected lcore 2 as core 2 on socket 0
00:07:54.367 EAL: Detected lcore 3 as core 3 on socket 0
00:07:54.367 EAL: Detected lcore 4 as core 4 on socket 0
00:07:54.367 EAL: Detected lcore 5 as core 5 on socket 0
00:07:54.367 EAL: Detected lcore 6 as core 6 on socket 0
00:07:54.367 EAL: Detected lcore 7 as core 7 on socket 0
00:07:54.367 EAL: Detected lcore 8 as core 8 on socket 0
00:07:54.367 EAL: Detected lcore 9 as core 9 on socket 0
00:07:54.367 EAL: Detected lcore 10 as core 10 on socket 0
00:07:54.367 EAL: Detected lcore 11 as core 11 on socket 0
00:07:54.367 EAL: Detected lcore 12 as core 12 on socket 0
00:07:54.367 EAL: Detected lcore 13 as core 13 on socket 0
00:07:54.367 EAL: Detected lcore 14 as core 14 on socket 0
00:07:54.367 EAL: Detected lcore 15 as core 15 on socket 0
00:07:54.367 EAL: Detected lcore 16 as core 16 on socket 0
00:07:54.367 EAL: Detected lcore 17 as core 17 on socket 0
00:07:54.367 EAL: Detected lcore 18 as core 18 on socket 0
00:07:54.367 EAL: Detected lcore 19 as core 19 on socket 0
00:07:54.367 EAL: Detected lcore 20 as core 20 on socket 0
00:07:54.367 EAL: Detected lcore 21 as core 21 on socket 0
00:07:54.367 EAL: Detected lcore 22 as core 22 on socket 0
00:07:54.367 EAL: Detected lcore 23 as core 23 on socket 0
00:07:54.367 EAL: Detected lcore 24 as core 24 on socket 0
00:07:54.367 EAL: Detected lcore 25 as core 25 on socket 0
00:07:54.367 EAL: Detected lcore 26 as core 26 on socket 0
00:07:54.367 EAL: Detected lcore 27 as core 27 on socket 0
00:07:54.367 EAL: Detected lcore 28 as core 28 on socket 0
00:07:54.367 EAL: Detected lcore 29 as core 29 on socket 0
00:07:54.367 EAL: Detected lcore 30 as core 30 on socket 0
00:07:54.367 EAL: Detected lcore 31 as core 31 on socket 0
00:07:54.367 EAL: Detected lcore 32 as core 32 on socket 0
00:07:54.367 EAL: Detected lcore 33 as core 33 on socket 0
00:07:54.367 EAL: Detected lcore 34 as core 34 on socket 0
00:07:54.367 EAL: Detected lcore 35 as core 35 on socket 0
00:07:54.367 EAL: Detected lcore 36 as core 0 on socket 1
00:07:54.367 EAL: Detected lcore 37 as core 1 on socket 1
00:07:54.367 EAL: Detected lcore 38 as core 2 on socket 1
00:07:54.367 EAL: Detected lcore 39 as core 3 on socket 1
00:07:54.367 EAL: Detected lcore 40 as core 4 on socket 1
00:07:54.367 EAL: Detected lcore 41 as core 5 on socket 1
00:07:54.367 EAL: Detected lcore 42 as core 6 on socket 1
00:07:54.367 EAL: Detected lcore 43 as core 7 on socket 1
00:07:54.367 EAL: Detected lcore 44 as core 8 on socket 1
00:07:54.367 EAL: Detected lcore 45 as core 9 on socket 1
00:07:54.367 EAL: Detected lcore 46 as core 10 on socket 1
00:07:54.367 EAL: Detected lcore 47 as core 11 on socket 1
00:07:54.367 EAL: Detected lcore 48 as core 12 on socket 1
00:07:54.367 EAL: Detected lcore 49 as core 13 on socket 1
00:07:54.368 EAL: Detected lcore 50 as core 14 on socket 1
00:07:54.368 EAL: Detected lcore 51 as core 15 on socket 1
00:07:54.368 EAL: Detected lcore 52 as core 16 on socket 1
00:07:54.368 EAL: Detected lcore 53 as core 17 on socket 1
00:07:54.368 EAL: Detected lcore 54 as core 18 on socket 1
00:07:54.368 EAL: Detected lcore 55 as core 19 on socket 1
00:07:54.368 EAL: Detected lcore 56 as core 20 on socket 1
00:07:54.368 EAL: Detected lcore 57 as core 21 on socket 1
00:07:54.368 EAL: Detected lcore 58 as core 22 on socket 1
00:07:54.368 EAL: Detected lcore 59 as core 23 on socket 1
00:07:54.368 EAL: Detected lcore 60 as core 24 on socket 1
00:07:54.368 EAL: Detected lcore 61 as core 25 on socket 1
00:07:54.368 EAL: Detected lcore 62 as core 26 on socket 1
00:07:54.368 EAL: Detected lcore 63 as core 27 on socket 1
00:07:54.368 EAL: Detected lcore 64 as core 28 on socket 1
00:07:54.368 EAL: Detected lcore 65 as core 29 on socket 1
00:07:54.368 EAL: Detected lcore 66 as core 30 on socket 1
00:07:54.368 EAL: Detected lcore 67 as core 31 on socket 1
00:07:54.368 EAL: Detected lcore 68 as core 32 on socket 1
00:07:54.368 EAL: Detected lcore 69 as core 33 on socket 1
00:07:54.368 EAL: Detected lcore 70 as core 34 on socket 1
00:07:54.368 EAL: Detected lcore 71 as core 35 on socket 1
00:07:54.368 EAL: Detected lcore 72 as core 0 on socket 0
00:07:54.368 EAL: Detected lcore 73 as core 1 on socket 0
00:07:54.368 EAL: Detected lcore 74 as core 2 on socket 0
00:07:54.368 EAL: Detected lcore 75 as core 3 on socket 0
00:07:54.368 EAL: Detected lcore 76 as core 4 on socket 0
00:07:54.368 EAL: Detected lcore 77 as core 5 on socket 0
00:07:54.368 EAL: Detected lcore 78 as core 6 on socket 0
00:07:54.368 EAL: Detected lcore 79 as core 7 on socket 0
00:07:54.368 EAL: Detected lcore 80 as core 8 on socket 0
00:07:54.368 EAL: Detected lcore 81 as core 9 on socket 0
00:07:54.368 EAL: Detected lcore 82 as core 10 on socket 0
00:07:54.368 EAL: Detected lcore 83 as core 11 on socket 0
00:07:54.368 EAL: Detected lcore 84 as core 12 on socket 0
00:07:54.368 EAL: Detected lcore 85 as core 13 on socket 0
00:07:54.368 EAL: Detected lcore 86 as core 14 on socket 0
00:07:54.368 EAL: Detected lcore 87 as core 15 on socket 0
00:07:54.368 EAL: Detected lcore 88 as core 16 on socket 0
00:07:54.368 EAL: Detected lcore 89 as core 17 on socket 0
00:07:54.368 EAL: Detected lcore 90 as core 18 on socket 0
00:07:54.368 EAL: Detected lcore 91 as core 19 on socket 0
00:07:54.368 EAL: Detected lcore 92 as core 20 on socket 0
00:07:54.368 EAL: Detected lcore 93 as core 21 on socket 0
00:07:54.368 EAL: Detected lcore 94 as core 22 on socket 0
00:07:54.368 EAL: Detected lcore 95 as core 23 on socket 0
00:07:54.368 EAL: Detected lcore 96 as core 24 on socket 0
00:07:54.368 EAL: Detected lcore 97 as core 25 on socket 0
00:07:54.368 EAL: Detected lcore 98 as core 26 on socket 0
00:07:54.368 EAL: Detected lcore 99 as core 27 on socket 0
00:07:54.368 EAL: Detected lcore 100 as core 28 on socket 0
00:07:54.368 EAL: Detected lcore 101 as core 29 on socket 0
00:07:54.368 EAL: Detected lcore 102 as core 30 on socket 0
00:07:54.368 EAL: Detected lcore 103 as core 31 on socket 0
00:07:54.368 EAL: Detected lcore 104 as core 32 on socket 0
00:07:54.368 EAL: Detected lcore 105 as core 33 on socket 0
00:07:54.368 EAL: Detected lcore 106 as core 34 on socket 0
00:07:54.368 EAL: Detected lcore 107 as core 35 on socket 0
00:07:54.368 EAL: Detected lcore 108 as core 0 on socket 1
00:07:54.368 EAL: Detected lcore 109 as core 1 on socket 1
00:07:54.368 EAL: Detected lcore 110 as core 2 on socket 1
00:07:54.368 EAL: Detected lcore 111 as core 3 on socket 1
00:07:54.368 EAL: Detected lcore 112 as core 4 on socket 1
00:07:54.368 EAL: Detected lcore 113 as core 5 on socket 1
00:07:54.368 EAL: Detected lcore 114 as core 6 on socket 1
00:07:54.368 EAL: Detected lcore 115 as core 7 on socket 1
00:07:54.368 EAL: Detected lcore 116 as core 8 on socket 1
00:07:54.368 EAL: Detected lcore 117 as core 9 on socket 1
00:07:54.368 EAL: Detected lcore 118 as core 10 on socket 1
00:07:54.368 EAL: Detected lcore 119 as core 11 on socket 1
00:07:54.368 EAL: Detected lcore 120 as core 12 on socket 1
00:07:54.368 EAL: Detected lcore 121 as core 13 on socket 1
00:07:54.368 EAL: Detected lcore 122 as core 14 on socket 1
00:07:54.368 EAL: Detected lcore 123 as core 15 on socket 1
00:07:54.368 EAL: Detected lcore 124 as core 16 on socket 1
00:07:54.368 EAL: Detected lcore 125 as core 17 on socket 1
00:07:54.368 EAL: Detected lcore 126 as core 18 on socket 1
00:07:54.368 EAL: Detected lcore 127 as core 19 on socket 1
00:07:54.368 EAL: Skipped lcore 128 as core 20 on socket 1
00:07:54.368 EAL: Skipped lcore 129 as core 21 on socket 1
00:07:54.368 EAL: Skipped lcore 130 as core 22 on socket 1
00:07:54.368 EAL: Skipped lcore 131 as core 23 on socket 1
00:07:54.368 EAL: Skipped lcore 132 as core 24 on socket 1
00:07:54.368 EAL: Skipped lcore 133 as core 25 on socket 1
00:07:54.368 EAL: Skipped lcore 134 as core 26 on socket 1
00:07:54.368 EAL: Skipped lcore 135 as core 27 on socket 1
00:07:54.368 EAL: Skipped lcore 136 as core 28 on socket 1
00:07:54.368 EAL: Skipped lcore 137 as core 29 on socket 1
00:07:54.368 EAL: Skipped lcore 138 as core 30 on socket 1
00:07:54.368 EAL: Skipped lcore 139 as core 31 on socket 1
00:07:54.368 EAL: Skipped lcore 140 as core 32 on socket 1
00:07:54.368 EAL: Skipped lcore 141 as core 33 on socket 1
00:07:54.368 EAL: Skipped lcore 142 as core 34 on socket 1
00:07:54.368 EAL: Skipped lcore 143 as core 35 on socket 1
00:07:54.368 EAL: Maximum logical cores by configuration: 128
00:07:54.368 EAL: Detected CPU lcores: 128
00:07:54.368 EAL: Detected NUMA nodes: 2
00:07:54.368 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:07:54.368 EAL: Detected shared linkage of DPDK
00:07:54.368 EAL: No shared files mode enabled, IPC will be disabled
00:07:54.368 EAL: Bus pci wants IOVA as 'DC'
00:07:54.368 EAL: Buses did not request a specific IOVA mode.
00:07:54.368 EAL: IOMMU is available, selecting IOVA as VA mode.
00:07:54.368 EAL: Selected IOVA mode 'VA'
00:07:54.368 EAL: Probing VFIO support...
00:07:54.368 EAL: IOMMU type 1 (Type 1) is supported
00:07:54.368 EAL: IOMMU type 7 (sPAPR) is not supported
00:07:54.368 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:07:54.368 EAL: VFIO support initialized
00:07:54.368 EAL: Ask a virtual area of 0x2e000 bytes
00:07:54.368 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:07:54.368 EAL: Setting up physically contiguous memory...
00:07:54.368 EAL: Setting maximum number of open files to 524288
00:07:54.368 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:07:54.368 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:07:54.368 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:07:54.368 EAL: Ask a virtual area of 0x61000 bytes
00:07:54.368 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:07:54.368 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:54.368 EAL: Ask a virtual area of 0x400000000 bytes
00:07:54.368 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:07:54.368 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:07:54.368 EAL: Ask a virtual area of 0x61000 bytes
00:07:54.368 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:07:54.368 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:54.368 EAL: Ask a virtual area of 0x400000000 bytes
00:07:54.368 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:07:54.368 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:07:54.368 EAL: Ask a virtual area of 0x61000 bytes
00:07:54.368 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:07:54.368 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:54.368 EAL: Ask a virtual area of 0x400000000 bytes
00:07:54.368 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:07:54.368 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:07:54.368 EAL: Ask a virtual area of 0x61000 bytes
00:07:54.368 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:07:54.368 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:54.368 EAL: Ask a virtual area of 0x400000000 bytes
00:07:54.368 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:07:54.368 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:07:54.368 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:07:54.368 EAL: Ask a virtual area of 0x61000 bytes
00:07:54.368 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:07:54.368 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:07:54.368 EAL: Ask a virtual area of 0x400000000 bytes
00:07:54.368 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:07:54.368 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:07:54.368 EAL: Ask a virtual area of 0x61000 bytes
00:07:54.368 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:07:54.368 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:07:54.368 EAL: Ask a virtual area of 0x400000000 bytes
00:07:54.368 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:07:54.368 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:07:54.368 EAL: Ask a virtual area of 0x61000 bytes
00:07:54.368 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:07:54.368 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:07:54.368 EAL: Ask a virtual area of 0x400000000 bytes
00:07:54.368 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:07:54.368 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:07:54.368 EAL: Ask a virtual area of 0x61000 bytes
00:07:54.368 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:07:54.368 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:07:54.368 EAL: Ask a virtual area of 0x400000000 bytes
00:07:54.368 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:07:54.368 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:07:54.368 EAL: Hugepages will be freed exactly as allocated.
00:07:54.368 EAL: No shared files mode enabled, IPC is disabled
00:07:54.368 EAL: No shared files mode enabled, IPC is disabled
00:07:54.368 EAL: TSC frequency is ~2400000 KHz
00:07:54.368 EAL: Main lcore 0 is ready (tid=7f8e6650da00;cpuset=[0])
00:07:54.368 EAL: Trying to obtain current memory policy.
00:07:54.368 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.368 EAL: Restoring previous memory policy: 0
00:07:54.368 EAL: request: mp_malloc_sync
00:07:54.368 EAL: No shared files mode enabled, IPC is disabled
00:07:54.368 EAL: Heap on socket 0 was expanded by 2MB
00:07:54.368 EAL: No shared files mode enabled, IPC is disabled
00:07:54.368 EAL: No PCI address specified using 'addr=' in: bus=pci
00:07:54.368 EAL: Mem event callback 'spdk:(nil)' registered
00:07:54.368
00:07:54.368
00:07:54.368 CUnit - A unit testing framework for C - Version 2.1-3
00:07:54.368 http://cunit.sourceforge.net/
00:07:54.368
00:07:54.368
00:07:54.368 Suite: components_suite
00:07:54.368 Test: vtophys_malloc_test ...passed
00:07:54.368 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:07:54.368 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.368 EAL: Restoring previous memory policy: 4
00:07:54.368 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.368 EAL: request: mp_malloc_sync
00:07:54.368 EAL: No shared files mode enabled, IPC is disabled
00:07:54.368 EAL: Heap on socket 0 was expanded by 4MB
00:07:54.368 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.368 EAL: request: mp_malloc_sync
00:07:54.368 EAL: No shared files mode enabled, IPC is disabled
00:07:54.368 EAL: Heap on socket 0 was shrunk by 4MB
00:07:54.368 EAL: Trying to obtain current memory policy.
00:07:54.368 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.368 EAL: Restoring previous memory policy: 4
00:07:54.368 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.368 EAL: request: mp_malloc_sync
00:07:54.368 EAL: No shared files mode enabled, IPC is disabled
00:07:54.368 EAL: Heap on socket 0 was expanded by 6MB
00:07:54.368 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.368 EAL: request: mp_malloc_sync
00:07:54.368 EAL: No shared files mode enabled, IPC is disabled
00:07:54.368 EAL: Heap on socket 0 was shrunk by 6MB
00:07:54.368 EAL: Trying to obtain current memory policy.
00:07:54.368 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.369 EAL: Restoring previous memory policy: 4
00:07:54.369 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.369 EAL: request: mp_malloc_sync
00:07:54.369 EAL: No shared files mode enabled, IPC is disabled
00:07:54.369 EAL: Heap on socket 0 was expanded by 10MB
00:07:54.369 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.369 EAL: request: mp_malloc_sync
00:07:54.369 EAL: No shared files mode enabled, IPC is disabled
00:07:54.369 EAL: Heap on socket 0 was shrunk by 10MB
00:07:54.369 EAL: Trying to obtain current memory policy.
00:07:54.369 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.369 EAL: Restoring previous memory policy: 4
00:07:54.369 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.369 EAL: request: mp_malloc_sync
00:07:54.369 EAL: No shared files mode enabled, IPC is disabled
00:07:54.369 EAL: Heap on socket 0 was expanded by 18MB
00:07:54.369 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.369 EAL: request: mp_malloc_sync
00:07:54.369 EAL: No shared files mode enabled, IPC is disabled
00:07:54.369 EAL: Heap on socket 0 was shrunk by 18MB
00:07:54.369 EAL: Trying to obtain current memory policy.
00:07:54.369 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.629 EAL: Restoring previous memory policy: 4
00:07:54.629 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.629 EAL: request: mp_malloc_sync
00:07:54.629 EAL: No shared files mode enabled, IPC is disabled
00:07:54.629 EAL: Heap on socket 0 was expanded by 34MB
00:07:54.629 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.629 EAL: request: mp_malloc_sync
00:07:54.629 EAL: No shared files mode enabled, IPC is disabled
00:07:54.629 EAL: Heap on socket 0 was shrunk by 34MB
00:07:54.629 EAL: Trying to obtain current memory policy.
00:07:54.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.629 EAL: Restoring previous memory policy: 4
00:07:54.629 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.629 EAL: request: mp_malloc_sync
00:07:54.629 EAL: No shared files mode enabled, IPC is disabled
00:07:54.629 EAL: Heap on socket 0 was expanded by 66MB
00:07:54.629 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.629 EAL: request: mp_malloc_sync
00:07:54.629 EAL: No shared files mode enabled, IPC is disabled
00:07:54.629 EAL: Heap on socket 0 was shrunk by 66MB
00:07:54.629 EAL: Trying to obtain current memory policy.
00:07:54.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.629 EAL: Restoring previous memory policy: 4
00:07:54.629 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.629 EAL: request: mp_malloc_sync
00:07:54.629 EAL: No shared files mode enabled, IPC is disabled
00:07:54.629 EAL: Heap on socket 0 was expanded by 130MB
00:07:54.629 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.629 EAL: request: mp_malloc_sync
00:07:54.629 EAL: No shared files mode enabled, IPC is disabled
00:07:54.629 EAL: Heap on socket 0 was shrunk by 130MB
00:07:54.629 EAL: Trying to obtain current memory policy.
00:07:54.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.629 EAL: Restoring previous memory policy: 4
00:07:54.629 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.629 EAL: request: mp_malloc_sync
00:07:54.629 EAL: No shared files mode enabled, IPC is disabled
00:07:54.629 EAL: Heap on socket 0 was expanded by 258MB
00:07:54.629 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.629 EAL: request: mp_malloc_sync
00:07:54.629 EAL: No shared files mode enabled, IPC is disabled
00:07:54.630 EAL: Heap on socket 0 was shrunk by 258MB
00:07:54.630 EAL: Trying to obtain current memory policy.
00:07:54.630 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.630 EAL: Restoring previous memory policy: 4
00:07:54.630 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.630 EAL: request: mp_malloc_sync
00:07:54.630 EAL: No shared files mode enabled, IPC is disabled
00:07:54.630 EAL: Heap on socket 0 was expanded by 514MB
00:07:54.890 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.890 EAL: request: mp_malloc_sync
00:07:54.890 EAL: No shared files mode enabled, IPC is disabled
00:07:54.890 EAL: Heap on socket 0 was shrunk by 514MB
00:07:54.890 EAL: Trying to obtain current memory policy.
00:07:54.890 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:54.890 EAL: Restoring previous memory policy: 4
00:07:54.890 EAL: Calling mem event callback 'spdk:(nil)'
00:07:54.890 EAL: request: mp_malloc_sync
00:07:54.890 EAL: No shared files mode enabled, IPC is disabled
00:07:54.890 EAL: Heap on socket 0 was expanded by 1026MB
00:07:55.150 EAL: Calling mem event callback 'spdk:(nil)'
00:07:55.150 EAL: request: mp_malloc_sync
00:07:55.150 EAL: No shared files mode enabled, IPC is disabled
00:07:55.150 EAL: Heap on socket 0 was shrunk by 1026MB
00:07:55.150 passed
00:07:55.150
00:07:55.150 Run Summary: Type Total Ran Passed Failed Inactive
00:07:55.150 suites 1 1 n/a 0 0
00:07:55.150 tests 2 2 2 0 0
00:07:55.150 asserts 497 497 497 0 n/a
00:07:55.150
00:07:55.150 Elapsed time = 0.685 seconds
00:07:55.150 EAL: Calling mem event callback 'spdk:(nil)'
00:07:55.150 EAL: request: mp_malloc_sync
00:07:55.150 EAL: No shared files mode enabled, IPC is disabled
00:07:55.150 EAL: Heap on socket 0 was shrunk by 2MB
00:07:55.150 EAL: No shared files mode enabled, IPC is disabled
00:07:55.150 EAL: No shared files mode enabled, IPC is disabled
00:07:55.150 EAL: No shared files mode enabled, IPC is disabled
00:07:55.150
00:07:55.150 real 0m0.835s
00:07:55.150 user 0m0.425s
00:07:55.150 sys 0m0.373s
00:07:55.150 22:36:22 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:55.150 22:36:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:07:55.150 ************************************
00:07:55.150 END TEST env_vtophys
00:07:55.150 ************************************
00:07:55.150 22:36:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:07:55.150 22:36:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:55.150 22:36:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:55.150 22:36:22 env -- common/autotest_common.sh@10 -- # set +x
00:07:55.410 ************************************
00:07:55.410 START TEST env_pci
00:07:55.410 ************************************
00:07:55.410 22:36:22 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:07:55.410
00:07:55.410
00:07:55.410 CUnit - A unit testing framework for C - Version 2.1-3
00:07:55.411 http://cunit.sourceforge.net/
00:07:55.411
00:07:55.411
00:07:55.411 Suite: pci
00:07:55.411 Test: pci_hook ...[2024-09-30 22:36:22.190374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 464765 has claimed it
00:07:55.411 EAL: Cannot find device (10000:00:01.0)
00:07:55.411 EAL: Failed to attach device on primary process
00:07:55.411 passed
00:07:55.411
00:07:55.411 Run Summary: Type Total Ran Passed Failed Inactive
00:07:55.411 suites 1 1 n/a 0 0
00:07:55.411 tests 1 1 1 0 0
00:07:55.411 asserts 25 25 25 0 n/a
00:07:55.411
00:07:55.411 Elapsed time = 0.032 seconds
00:07:55.411
00:07:55.411 real 0m0.053s
00:07:55.411 user 0m0.019s
00:07:55.411 sys 0m0.034s
00:07:55.411 22:36:22 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:55.411 22:36:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:07:55.411 ************************************
00:07:55.411 END TEST env_pci
00:07:55.411 ************************************
00:07:55.411 22:36:22 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:07:55.411 22:36:22 env -- env/env.sh@15 -- # uname
00:07:55.411 22:36:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:07:55.411 22:36:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:07:55.411 22:36:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:55.411 22:36:22 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:55.411 22:36:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:55.411 22:36:22 env -- common/autotest_common.sh@10 -- # set +x
00:07:55.411 ************************************
00:07:55.411 START TEST env_dpdk_post_init
00:07:55.411 ************************************
00:07:55.411 22:36:22 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:55.411 EAL: Detected CPU lcores: 128
00:07:55.411 EAL: Detected NUMA nodes: 2
00:07:55.411 EAL: Detected shared linkage of DPDK
00:07:55.411 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:55.411 EAL: Selected IOVA mode 'VA'
00:07:55.411 EAL: VFIO support initialized
00:07:55.411 TELEMETRY: No legacy callbacks, legacy socket not created
00:07:55.671 EAL: Using IOMMU type 1 (Type 1)
00:07:55.671 EAL: Ignore mapping IO port bar(1)
00:07:55.671 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:07:55.931 EAL: Ignore mapping IO port bar(1)
00:07:55.931 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:07:56.192 EAL: Ignore mapping IO port bar(1)
00:07:56.192 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:07:56.452 EAL: Ignore mapping IO port bar(1)
00:07:56.452 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:07:56.452 EAL: Ignore mapping IO port bar(1)
00:07:56.712 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:07:56.712 EAL: Ignore mapping IO port bar(1)
00:07:56.972 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:07:56.972 EAL: Ignore mapping IO port bar(1)
00:07:57.232 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:07:57.232 EAL: Ignore mapping IO port bar(1)
00:07:57.232 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:07:57.492 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:07:57.753 EAL: Ignore mapping IO port bar(1)
00:07:57.753 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:07:58.013 EAL: Ignore mapping IO port bar(1)
00:07:58.013 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:07:58.272 EAL: Ignore mapping IO port bar(1)
00:07:58.272 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:07:58.272 EAL: Ignore mapping IO port bar(1)
00:07:58.532 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:07:58.532 EAL: Ignore mapping IO port bar(1)
00:07:58.792 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:07:58.792 EAL: Ignore mapping IO port bar(1)
00:07:59.052 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:07:59.052 EAL: Ignore mapping IO port bar(1)
00:07:59.052 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:07:59.313 EAL: Ignore mapping IO port bar(1)
00:07:59.313 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:07:59.313 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:07:59.313 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:07:59.573 Starting DPDK initialization...
00:07:59.573 Starting SPDK post initialization...
00:07:59.573 SPDK NVMe probe
00:07:59.573 Attaching to 0000:65:00.0
00:07:59.573 Attached to 0000:65:00.0
00:07:59.573 Cleaning up...
00:08:01.485
00:08:01.485 real 0m5.737s
00:08:01.485 user 0m0.109s
00:08:01.485 sys 0m0.184s
00:08:01.485 22:36:28 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:01.485 22:36:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:01.485 ************************************
00:08:01.485 END TEST env_dpdk_post_init
00:08:01.485 ************************************
00:08:01.485 22:36:28 env -- env/env.sh@26 -- # uname
00:08:01.485 22:36:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:01.485 22:36:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:01.485 22:36:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:01.485 22:36:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:01.485 22:36:28 env -- common/autotest_common.sh@10 -- # set +x
00:08:01.485 ************************************
00:08:01.485 START TEST env_mem_callbacks
00:08:01.485 ************************************
00:08:01.485 22:36:28 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:01.485 EAL: Detected CPU lcores: 128
00:08:01.485 EAL: Detected NUMA nodes: 2
00:08:01.485 EAL: Detected shared linkage of DPDK
00:08:01.485 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:01.485 EAL: Selected IOVA mode 'VA'
00:08:01.485 EAL: VFIO support initialized
00:08:01.485 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:01.485
00:08:01.485
00:08:01.485 CUnit - A unit testing framework for C - Version 2.1-3
00:08:01.485 http://cunit.sourceforge.net/
00:08:01.485
00:08:01.485
00:08:01.485 Suite: memory
00:08:01.485 Test: test ...
00:08:01.485 register 0x200000200000 2097152 00:08:01.485 malloc 3145728 00:08:01.485 register 0x200000400000 4194304 00:08:01.485 buf 0x200000500000 len 3145728 PASSED 00:08:01.485 malloc 64 00:08:01.485 buf 0x2000004fff40 len 64 PASSED 00:08:01.485 malloc 4194304 00:08:01.485 register 0x200000800000 6291456 00:08:01.485 buf 0x200000a00000 len 4194304 PASSED 00:08:01.485 free 0x200000500000 3145728 00:08:01.485 free 0x2000004fff40 64 00:08:01.485 unregister 0x200000400000 4194304 PASSED 00:08:01.485 free 0x200000a00000 4194304 00:08:01.486 unregister 0x200000800000 6291456 PASSED 00:08:01.486 malloc 8388608 00:08:01.486 register 0x200000400000 10485760 00:08:01.486 buf 0x200000600000 len 8388608 PASSED 00:08:01.486 free 0x200000600000 8388608 00:08:01.486 unregister 0x200000400000 10485760 PASSED 00:08:01.486 passed 00:08:01.486 00:08:01.486 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.486 suites 1 1 n/a 0 0 00:08:01.486 tests 1 1 1 0 0 00:08:01.486 asserts 15 15 15 0 n/a 00:08:01.486 00:08:01.486 Elapsed time = 0.010 seconds 00:08:01.486 00:08:01.486 real 0m0.069s 00:08:01.486 user 0m0.021s 00:08:01.486 sys 0m0.049s 00:08:01.486 22:36:28 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.486 22:36:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:01.486 ************************************ 00:08:01.486 END TEST env_mem_callbacks 00:08:01.486 ************************************ 00:08:01.486 00:08:01.486 real 0m7.526s 00:08:01.486 user 0m1.048s 00:08:01.486 sys 0m1.030s 00:08:01.486 22:36:28 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.486 22:36:28 env -- common/autotest_common.sh@10 -- # set +x 00:08:01.486 ************************************ 00:08:01.486 END TEST env 00:08:01.486 ************************************ 00:08:01.486 22:36:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:01.486 22:36:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.486 22:36:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.486 22:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:01.486 ************************************ 00:08:01.486 START TEST rpc 00:08:01.486 ************************************ 00:08:01.486 22:36:28 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:01.486 * Looking for test storage... 
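The rpc suite that follows starts a fresh spdk_tgt and drives it with scripts/rpc.py over the default UNIX socket /var/tmp/spdk.sock. A minimal by-hand version of the round-trip it performs, assuming the workspace layout above and that the target names the first malloc bdev Malloc0:

    ./build/bin/spdk_tgt -e bdev &                # start the target, bdev tracepoint group enabled
    ./scripts/rpc.py bdev_malloc_create 8 512     # 8 MB malloc bdev, 512 B blocks; prints the new name, e.g. Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length   # expect 2: Malloc0 plus Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0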
00:08:01.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:01.486 22:36:28 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:01.486 22:36:28 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:01.486 22:36:28 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:01.747 22:36:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.747 22:36:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.747 22:36:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.747 22:36:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.747 22:36:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.747 22:36:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.747 22:36:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.747 22:36:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.747 22:36:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.747 22:36:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.747 22:36:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.747 22:36:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:01.747 22:36:28 rpc -- scripts/common.sh@345 -- # : 1 00:08:01.747 22:36:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.747 22:36:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.747 22:36:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:01.747 22:36:28 rpc -- scripts/common.sh@353 -- # local d=1 00:08:01.747 22:36:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.747 22:36:28 rpc -- scripts/common.sh@355 -- # echo 1 00:08:01.747 22:36:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.747 22:36:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:01.747 22:36:28 rpc -- scripts/common.sh@353 -- # local d=2 00:08:01.747 22:36:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.747 22:36:28 rpc -- scripts/common.sh@355 -- # echo 2 00:08:01.747 22:36:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.747 22:36:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.747 22:36:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.747 22:36:28 rpc -- scripts/common.sh@368 -- # return 0 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:01.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.747 --rc genhtml_branch_coverage=1 00:08:01.747 --rc genhtml_function_coverage=1 00:08:01.747 --rc genhtml_legend=1 00:08:01.747 --rc geninfo_all_blocks=1 00:08:01.747 --rc geninfo_unexecuted_blocks=1 00:08:01.747 00:08:01.747 ' 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:01.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.747 --rc genhtml_branch_coverage=1 00:08:01.747 --rc genhtml_function_coverage=1 00:08:01.747 --rc genhtml_legend=1 00:08:01.747 --rc geninfo_all_blocks=1 00:08:01.747 --rc geninfo_unexecuted_blocks=1 00:08:01.747 00:08:01.747 ' 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:01.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.747 --rc genhtml_branch_coverage=1 00:08:01.747 --rc genhtml_function_coverage=1 
00:08:01.747 --rc genhtml_legend=1 00:08:01.747 --rc geninfo_all_blocks=1 00:08:01.747 --rc geninfo_unexecuted_blocks=1 00:08:01.747 00:08:01.747 ' 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:01.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.747 --rc genhtml_branch_coverage=1 00:08:01.747 --rc genhtml_function_coverage=1 00:08:01.747 --rc genhtml_legend=1 00:08:01.747 --rc geninfo_all_blocks=1 00:08:01.747 --rc geninfo_unexecuted_blocks=1 00:08:01.747 00:08:01.747 ' 00:08:01.747 22:36:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=465991 00:08:01.747 22:36:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:01.747 22:36:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 465991 00:08:01.747 22:36:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@831 -- # '[' -z 465991 ']' 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.747 22:36:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.747 [2024-09-30 22:36:28.595543] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:08:01.747 [2024-09-30 22:36:28.595611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465991 ] 00:08:01.747 [2024-09-30 22:36:28.680285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.009 [2024-09-30 22:36:28.776936] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:02.009 [2024-09-30 22:36:28.777005] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 465991' to capture a snapshot of events at runtime. 00:08:02.009 [2024-09-30 22:36:28.777014] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.009 [2024-09-30 22:36:28.777021] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.009 [2024-09-30 22:36:28.777027] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid465991 for offline analysis/debug. 
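The two app_setup_trace notices above spell out how to capture tracepoints from this target (the bdev group was enabled with -e bdev at startup): take a live snapshot while it runs, or keep the shm file for offline analysis after it exits:

    # live snapshot from the running target (command and pid exactly as printed above)
    spdk_trace -s spdk_tgt -p 465991
    # or preserve the per-pid shm file for later analysis/debug, as the notice suggests
    cp /dev/shm/spdk_tgt_trace.pid465991 /tmp/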
00:08:02.009 [2024-09-30 22:36:28.777057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.582 22:36:29 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.582 22:36:29 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:02.582 22:36:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:02.582 22:36:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:02.582 22:36:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:02.582 22:36:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:02.582 22:36:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.582 22:36:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.582 22:36:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 ************************************ 00:08:02.582 START TEST rpc_integrity 00:08:02.582 ************************************ 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:02.582 { 00:08:02.582 "name": "Malloc0", 00:08:02.582 "aliases": [ 00:08:02.582 "fb045fa0-5707-49ac-81a5-705d4b0a5880" 00:08:02.582 ], 00:08:02.582 "product_name": "Malloc disk", 00:08:02.582 "block_size": 512, 00:08:02.582 "num_blocks": 16384, 00:08:02.582 "uuid": "fb045fa0-5707-49ac-81a5-705d4b0a5880", 00:08:02.582 "assigned_rate_limits": { 00:08:02.582 "rw_ios_per_sec": 0, 00:08:02.582 "rw_mbytes_per_sec": 0, 00:08:02.582 "r_mbytes_per_sec": 0, 00:08:02.582 "w_mbytes_per_sec": 0 00:08:02.582 }, 
00:08:02.582 "claimed": false, 00:08:02.582 "zoned": false, 00:08:02.582 "supported_io_types": { 00:08:02.582 "read": true, 00:08:02.582 "write": true, 00:08:02.582 "unmap": true, 00:08:02.582 "flush": true, 00:08:02.582 "reset": true, 00:08:02.582 "nvme_admin": false, 00:08:02.582 "nvme_io": false, 00:08:02.582 "nvme_io_md": false, 00:08:02.582 "write_zeroes": true, 00:08:02.582 "zcopy": true, 00:08:02.582 "get_zone_info": false, 00:08:02.582 "zone_management": false, 00:08:02.582 "zone_append": false, 00:08:02.582 "compare": false, 00:08:02.582 "compare_and_write": false, 00:08:02.582 "abort": true, 00:08:02.582 "seek_hole": false, 00:08:02.582 "seek_data": false, 00:08:02.582 "copy": true, 00:08:02.582 "nvme_iov_md": false 00:08:02.582 }, 00:08:02.582 "memory_domains": [ 00:08:02.582 { 00:08:02.582 "dma_device_id": "system", 00:08:02.582 "dma_device_type": 1 00:08:02.582 }, 00:08:02.582 { 00:08:02.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.582 "dma_device_type": 2 00:08:02.582 } 00:08:02.582 ], 00:08:02.582 "driver_specific": {} 00:08:02.582 } 00:08:02.582 ]' 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 [2024-09-30 22:36:29.584241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:02.582 [2024-09-30 22:36:29.584292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.582 [2024-09-30 22:36:29.584307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a21e00 00:08:02.582 [2024-09-30 22:36:29.584315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.582 [2024-09-30 22:36:29.585865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.582 [2024-09-30 22:36:29.585915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:02.582 Passthru0 00:08:02.582 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.582 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:02.583 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.583 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:02.844 { 00:08:02.844 "name": "Malloc0", 00:08:02.844 "aliases": [ 00:08:02.844 "fb045fa0-5707-49ac-81a5-705d4b0a5880" 00:08:02.844 ], 00:08:02.844 "product_name": "Malloc disk", 00:08:02.844 "block_size": 512, 00:08:02.844 "num_blocks": 16384, 00:08:02.844 "uuid": "fb045fa0-5707-49ac-81a5-705d4b0a5880", 00:08:02.844 "assigned_rate_limits": { 00:08:02.844 "rw_ios_per_sec": 0, 00:08:02.844 "rw_mbytes_per_sec": 0, 00:08:02.844 "r_mbytes_per_sec": 0, 00:08:02.844 "w_mbytes_per_sec": 0 00:08:02.844 }, 00:08:02.844 "claimed": true, 00:08:02.844 "claim_type": "exclusive_write", 00:08:02.844 "zoned": false, 00:08:02.844 "supported_io_types": { 00:08:02.844 "read": true, 00:08:02.844 "write": true, 00:08:02.844 "unmap": true, 00:08:02.844 "flush": 
true, 00:08:02.844 "reset": true, 00:08:02.844 "nvme_admin": false, 00:08:02.844 "nvme_io": false, 00:08:02.844 "nvme_io_md": false, 00:08:02.844 "write_zeroes": true, 00:08:02.844 "zcopy": true, 00:08:02.844 "get_zone_info": false, 00:08:02.844 "zone_management": false, 00:08:02.844 "zone_append": false, 00:08:02.844 "compare": false, 00:08:02.844 "compare_and_write": false, 00:08:02.844 "abort": true, 00:08:02.844 "seek_hole": false, 00:08:02.844 "seek_data": false, 00:08:02.844 "copy": true, 00:08:02.844 "nvme_iov_md": false 00:08:02.844 }, 00:08:02.844 "memory_domains": [ 00:08:02.844 { 00:08:02.844 "dma_device_id": "system", 00:08:02.844 "dma_device_type": 1 00:08:02.844 }, 00:08:02.844 { 00:08:02.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.844 "dma_device_type": 2 00:08:02.844 } 00:08:02.844 ], 00:08:02.844 "driver_specific": {} 00:08:02.844 }, 00:08:02.844 { 00:08:02.844 "name": "Passthru0", 00:08:02.844 "aliases": [ 00:08:02.844 "6239d7ea-e174-5152-9608-535c5e491d05" 00:08:02.844 ], 00:08:02.844 "product_name": "passthru", 00:08:02.844 "block_size": 512, 00:08:02.844 "num_blocks": 16384, 00:08:02.844 "uuid": "6239d7ea-e174-5152-9608-535c5e491d05", 00:08:02.844 "assigned_rate_limits": { 00:08:02.844 "rw_ios_per_sec": 0, 00:08:02.844 "rw_mbytes_per_sec": 0, 00:08:02.844 "r_mbytes_per_sec": 0, 00:08:02.844 "w_mbytes_per_sec": 0 00:08:02.844 }, 00:08:02.844 "claimed": false, 00:08:02.844 "zoned": false, 00:08:02.844 "supported_io_types": { 00:08:02.844 "read": true, 00:08:02.844 "write": true, 00:08:02.844 "unmap": true, 00:08:02.844 "flush": true, 00:08:02.844 "reset": true, 00:08:02.844 "nvme_admin": false, 00:08:02.844 "nvme_io": false, 00:08:02.844 "nvme_io_md": false, 00:08:02.844 "write_zeroes": true, 00:08:02.844 "zcopy": true, 00:08:02.844 "get_zone_info": false, 00:08:02.844 "zone_management": false, 00:08:02.844 "zone_append": false, 00:08:02.844 "compare": false, 00:08:02.844 "compare_and_write": false, 00:08:02.844 "abort": true, 00:08:02.844 "seek_hole": false, 00:08:02.844 "seek_data": false, 00:08:02.844 "copy": true, 00:08:02.844 "nvme_iov_md": false 00:08:02.844 }, 00:08:02.844 "memory_domains": [ 00:08:02.844 { 00:08:02.844 "dma_device_id": "system", 00:08:02.844 "dma_device_type": 1 00:08:02.844 }, 00:08:02.844 { 00:08:02.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.844 "dma_device_type": 2 00:08:02.844 } 00:08:02.844 ], 00:08:02.844 "driver_specific": { 00:08:02.844 "passthru": { 00:08:02.844 "name": "Passthru0", 00:08:02.844 "base_bdev_name": "Malloc0" 00:08:02.844 } 00:08:02.844 } 00:08:02.844 } 00:08:02.844 ]' 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:02.844 22:36:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:02.844 00:08:02.844 real 0m0.297s 00:08:02.844 user 0m0.189s 00:08:02.844 sys 0m0.040s 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.844 22:36:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 ************************************ 00:08:02.844 END TEST rpc_integrity 00:08:02.844 ************************************ 00:08:02.844 22:36:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:02.844 22:36:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.844 22:36:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.844 22:36:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 ************************************ 00:08:02.844 START TEST rpc_plugins 00:08:02.844 ************************************ 00:08:02.844 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:02.844 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:02.844 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.844 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.844 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:02.844 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:02.844 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.844 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.844 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:02.844 { 00:08:02.844 "name": "Malloc1", 00:08:02.844 "aliases": [ 00:08:02.844 "82f60bd2-8980-455a-80e4-b4e533ac1c0a" 00:08:02.844 ], 00:08:02.844 "product_name": "Malloc disk", 00:08:02.844 "block_size": 4096, 00:08:02.844 "num_blocks": 256, 00:08:02.844 "uuid": "82f60bd2-8980-455a-80e4-b4e533ac1c0a", 00:08:02.844 "assigned_rate_limits": { 00:08:02.844 "rw_ios_per_sec": 0, 00:08:02.844 "rw_mbytes_per_sec": 0, 00:08:02.844 "r_mbytes_per_sec": 0, 00:08:02.844 "w_mbytes_per_sec": 0 00:08:02.844 }, 00:08:02.844 "claimed": false, 00:08:02.844 "zoned": false, 00:08:02.844 "supported_io_types": { 00:08:02.844 "read": true, 00:08:02.844 "write": true, 00:08:02.844 "unmap": true, 00:08:02.844 "flush": true, 00:08:02.844 "reset": true, 00:08:02.844 "nvme_admin": false, 00:08:02.844 "nvme_io": false, 00:08:02.844 "nvme_io_md": false, 00:08:02.844 "write_zeroes": true, 00:08:02.844 "zcopy": true, 00:08:02.844 "get_zone_info": false, 00:08:02.844 "zone_management": false, 00:08:02.844 "zone_append": false, 00:08:02.844 "compare": false, 00:08:02.844 "compare_and_write": false, 00:08:02.844 "abort": true, 00:08:02.844 "seek_hole": false, 00:08:02.844 "seek_data": false, 00:08:02.844 "copy": true, 00:08:02.844 "nvme_iov_md": false 
00:08:02.844 }, 00:08:02.844 "memory_domains": [ 00:08:02.844 { 00:08:02.844 "dma_device_id": "system", 00:08:02.844 "dma_device_type": 1 00:08:02.844 }, 00:08:02.844 { 00:08:02.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.844 "dma_device_type": 2 00:08:02.844 } 00:08:02.844 ], 00:08:02.844 "driver_specific": {} 00:08:02.844 } 00:08:02.844 ]' 00:08:02.844 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:03.106 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:03.106 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:03.106 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.106 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:03.106 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.106 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:03.106 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.106 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:03.106 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.106 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:03.106 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:03.106 22:36:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:03.106 00:08:03.106 real 0m0.150s 00:08:03.106 user 0m0.092s 00:08:03.106 sys 0m0.025s 00:08:03.106 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.106 22:36:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:03.106 ************************************ 00:08:03.106 END TEST rpc_plugins 00:08:03.106 ************************************ 00:08:03.106 22:36:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:03.106 22:36:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.106 22:36:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.106 22:36:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.106 ************************************ 00:08:03.106 START TEST rpc_trace_cmd_test 00:08:03.106 ************************************ 00:08:03.106 22:36:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:08:03.106 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:03.106 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:03.106 22:36:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.106 22:36:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.106 22:36:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.106 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:03.106 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid465991", 00:08:03.106 "tpoint_group_mask": "0x8", 00:08:03.106 "iscsi_conn": { 00:08:03.106 "mask": "0x2", 00:08:03.106 "tpoint_mask": "0x0" 00:08:03.106 }, 00:08:03.106 "scsi": { 00:08:03.106 "mask": "0x4", 00:08:03.106 "tpoint_mask": "0x0" 00:08:03.106 }, 00:08:03.106 "bdev": { 00:08:03.106 "mask": "0x8", 00:08:03.106 "tpoint_mask": "0xffffffffffffffff" 00:08:03.106 }, 00:08:03.106 "nvmf_rdma": { 00:08:03.106 "mask": "0x10", 00:08:03.106 "tpoint_mask": "0x0" 00:08:03.106 }, 00:08:03.106 "nvmf_tcp": { 00:08:03.106 "mask": "0x20", 00:08:03.106 
"tpoint_mask": "0x0" 00:08:03.106 }, 00:08:03.106 "ftl": { 00:08:03.106 "mask": "0x40", 00:08:03.106 "tpoint_mask": "0x0" 00:08:03.106 }, 00:08:03.106 "blobfs": { 00:08:03.106 "mask": "0x80", 00:08:03.106 "tpoint_mask": "0x0" 00:08:03.106 }, 00:08:03.106 "dsa": { 00:08:03.106 "mask": "0x200", 00:08:03.106 "tpoint_mask": "0x0" 00:08:03.106 }, 00:08:03.106 "thread": { 00:08:03.107 "mask": "0x400", 00:08:03.107 "tpoint_mask": "0x0" 00:08:03.107 }, 00:08:03.107 "nvme_pcie": { 00:08:03.107 "mask": "0x800", 00:08:03.107 "tpoint_mask": "0x0" 00:08:03.107 }, 00:08:03.107 "iaa": { 00:08:03.107 "mask": "0x1000", 00:08:03.107 "tpoint_mask": "0x0" 00:08:03.107 }, 00:08:03.107 "nvme_tcp": { 00:08:03.107 "mask": "0x2000", 00:08:03.107 "tpoint_mask": "0x0" 00:08:03.107 }, 00:08:03.107 "bdev_nvme": { 00:08:03.107 "mask": "0x4000", 00:08:03.107 "tpoint_mask": "0x0" 00:08:03.107 }, 00:08:03.107 "sock": { 00:08:03.107 "mask": "0x8000", 00:08:03.107 "tpoint_mask": "0x0" 00:08:03.107 }, 00:08:03.107 "blob": { 00:08:03.107 "mask": "0x10000", 00:08:03.107 "tpoint_mask": "0x0" 00:08:03.107 }, 00:08:03.107 "bdev_raid": { 00:08:03.107 "mask": "0x20000", 00:08:03.107 "tpoint_mask": "0x0" 00:08:03.107 } 00:08:03.107 }' 00:08:03.107 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:03.107 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:08:03.107 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:03.368 00:08:03.368 real 0m0.255s 00:08:03.368 user 0m0.207s 00:08:03.368 sys 0m0.036s 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.368 22:36:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.368 ************************************ 00:08:03.368 END TEST rpc_trace_cmd_test 00:08:03.368 ************************************ 00:08:03.368 22:36:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:03.368 22:36:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:03.368 22:36:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:03.368 22:36:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.368 22:36:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.368 22:36:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 ************************************ 00:08:03.629 START TEST rpc_daemon_integrity 00:08:03.629 ************************************ 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:03.629 { 00:08:03.629 "name": "Malloc2", 00:08:03.629 "aliases": [ 00:08:03.629 "65192b14-a008-431b-8c69-f714ede20a16" 00:08:03.629 ], 00:08:03.629 "product_name": "Malloc disk", 00:08:03.629 "block_size": 512, 00:08:03.629 "num_blocks": 16384, 00:08:03.629 "uuid": "65192b14-a008-431b-8c69-f714ede20a16", 00:08:03.629 "assigned_rate_limits": { 00:08:03.629 "rw_ios_per_sec": 0, 00:08:03.629 "rw_mbytes_per_sec": 0, 00:08:03.629 "r_mbytes_per_sec": 0, 00:08:03.629 "w_mbytes_per_sec": 0 00:08:03.629 }, 00:08:03.629 "claimed": false, 00:08:03.629 "zoned": false, 00:08:03.629 "supported_io_types": { 00:08:03.629 "read": true, 00:08:03.629 "write": true, 00:08:03.629 "unmap": true, 00:08:03.629 "flush": true, 00:08:03.629 "reset": true, 00:08:03.629 "nvme_admin": false, 00:08:03.629 "nvme_io": false, 00:08:03.629 "nvme_io_md": false, 00:08:03.629 "write_zeroes": true, 00:08:03.629 "zcopy": true, 00:08:03.629 "get_zone_info": false, 00:08:03.629 "zone_management": false, 00:08:03.629 "zone_append": false, 00:08:03.629 "compare": false, 00:08:03.629 "compare_and_write": false, 00:08:03.629 "abort": true, 00:08:03.629 "seek_hole": false, 00:08:03.629 "seek_data": false, 00:08:03.629 "copy": true, 00:08:03.629 "nvme_iov_md": false 00:08:03.629 }, 00:08:03.629 "memory_domains": [ 00:08:03.629 { 00:08:03.629 "dma_device_id": "system", 00:08:03.629 "dma_device_type": 1 00:08:03.629 }, 00:08:03.629 { 00:08:03.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.630 "dma_device_type": 2 00:08:03.630 } 00:08:03.630 ], 00:08:03.630 "driver_specific": {} 00:08:03.630 } 00:08:03.630 ]' 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.630 [2024-09-30 22:36:30.527003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:03.630 [2024-09-30 22:36:30.527054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.630 
[2024-09-30 22:36:30.527068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a22080 00:08:03.630 [2024-09-30 22:36:30.527076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.630 [2024-09-30 22:36:30.528551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.630 [2024-09-30 22:36:30.528590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:03.630 Passthru0 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:03.630 { 00:08:03.630 "name": "Malloc2", 00:08:03.630 "aliases": [ 00:08:03.630 "65192b14-a008-431b-8c69-f714ede20a16" 00:08:03.630 ], 00:08:03.630 "product_name": "Malloc disk", 00:08:03.630 "block_size": 512, 00:08:03.630 "num_blocks": 16384, 00:08:03.630 "uuid": "65192b14-a008-431b-8c69-f714ede20a16", 00:08:03.630 "assigned_rate_limits": { 00:08:03.630 "rw_ios_per_sec": 0, 00:08:03.630 "rw_mbytes_per_sec": 0, 00:08:03.630 "r_mbytes_per_sec": 0, 00:08:03.630 "w_mbytes_per_sec": 0 00:08:03.630 }, 00:08:03.630 "claimed": true, 00:08:03.630 "claim_type": "exclusive_write", 00:08:03.630 "zoned": false, 00:08:03.630 "supported_io_types": { 00:08:03.630 "read": true, 00:08:03.630 "write": true, 00:08:03.630 "unmap": true, 00:08:03.630 "flush": true, 00:08:03.630 "reset": true, 00:08:03.630 "nvme_admin": false, 00:08:03.630 "nvme_io": false, 00:08:03.630 "nvme_io_md": false, 00:08:03.630 "write_zeroes": true, 00:08:03.630 "zcopy": true, 00:08:03.630 "get_zone_info": false, 00:08:03.630 "zone_management": false, 00:08:03.630 "zone_append": false, 00:08:03.630 "compare": false, 00:08:03.630 "compare_and_write": false, 00:08:03.630 "abort": true, 00:08:03.630 "seek_hole": false, 00:08:03.630 "seek_data": false, 00:08:03.630 "copy": true, 00:08:03.630 "nvme_iov_md": false 00:08:03.630 }, 00:08:03.630 "memory_domains": [ 00:08:03.630 { 00:08:03.630 "dma_device_id": "system", 00:08:03.630 "dma_device_type": 1 00:08:03.630 }, 00:08:03.630 { 00:08:03.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.630 "dma_device_type": 2 00:08:03.630 } 00:08:03.630 ], 00:08:03.630 "driver_specific": {} 00:08:03.630 }, 00:08:03.630 { 00:08:03.630 "name": "Passthru0", 00:08:03.630 "aliases": [ 00:08:03.630 "71d2c4fe-a66b-5297-a720-de92f67dfcf7" 00:08:03.630 ], 00:08:03.630 "product_name": "passthru", 00:08:03.630 "block_size": 512, 00:08:03.630 "num_blocks": 16384, 00:08:03.630 "uuid": "71d2c4fe-a66b-5297-a720-de92f67dfcf7", 00:08:03.630 "assigned_rate_limits": { 00:08:03.630 "rw_ios_per_sec": 0, 00:08:03.630 "rw_mbytes_per_sec": 0, 00:08:03.630 "r_mbytes_per_sec": 0, 00:08:03.630 "w_mbytes_per_sec": 0 00:08:03.630 }, 00:08:03.630 "claimed": false, 00:08:03.630 "zoned": false, 00:08:03.630 "supported_io_types": { 00:08:03.630 "read": true, 00:08:03.630 "write": true, 00:08:03.630 "unmap": true, 00:08:03.630 "flush": true, 00:08:03.630 "reset": true, 00:08:03.630 "nvme_admin": false, 00:08:03.630 "nvme_io": false, 00:08:03.630 "nvme_io_md": false, 00:08:03.630 
"write_zeroes": true, 00:08:03.630 "zcopy": true, 00:08:03.630 "get_zone_info": false, 00:08:03.630 "zone_management": false, 00:08:03.630 "zone_append": false, 00:08:03.630 "compare": false, 00:08:03.630 "compare_and_write": false, 00:08:03.630 "abort": true, 00:08:03.630 "seek_hole": false, 00:08:03.630 "seek_data": false, 00:08:03.630 "copy": true, 00:08:03.630 "nvme_iov_md": false 00:08:03.630 }, 00:08:03.630 "memory_domains": [ 00:08:03.630 { 00:08:03.630 "dma_device_id": "system", 00:08:03.630 "dma_device_type": 1 00:08:03.630 }, 00:08:03.630 { 00:08:03.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.630 "dma_device_type": 2 00:08:03.630 } 00:08:03.630 ], 00:08:03.630 "driver_specific": { 00:08:03.630 "passthru": { 00:08:03.630 "name": "Passthru0", 00:08:03.630 "base_bdev_name": "Malloc2" 00:08:03.630 } 00:08:03.630 } 00:08:03.630 } 00:08:03.630 ]' 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:03.630 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:03.891 22:36:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:03.891 00:08:03.891 real 0m0.301s 00:08:03.891 user 0m0.189s 00:08:03.891 sys 0m0.044s 00:08:03.891 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.891 22:36:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:03.891 ************************************ 00:08:03.891 END TEST rpc_daemon_integrity 00:08:03.891 ************************************ 00:08:03.891 22:36:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:03.891 22:36:30 rpc -- rpc/rpc.sh@84 -- # killprocess 465991 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@950 -- # '[' -z 465991 ']' 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@954 -- # kill -0 465991 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@955 -- # uname 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 465991 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.891 22:36:30 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 465991' 00:08:03.891 killing process with pid 465991 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@969 -- # kill 465991 00:08:03.891 22:36:30 rpc -- common/autotest_common.sh@974 -- # wait 465991 00:08:04.151 00:08:04.151 real 0m2.723s 00:08:04.151 user 0m3.476s 00:08:04.151 sys 0m0.815s 00:08:04.151 22:36:31 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.151 22:36:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.151 ************************************ 00:08:04.151 END TEST rpc 00:08:04.151 ************************************ 00:08:04.151 22:36:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:04.151 22:36:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.151 22:36:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.151 22:36:31 -- common/autotest_common.sh@10 -- # set +x 00:08:04.151 ************************************ 00:08:04.151 START TEST skip_rpc 00:08:04.151 ************************************ 00:08:04.151 22:36:31 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:04.411 * Looking for test storage... 00:08:04.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.411 22:36:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:04.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.411 --rc genhtml_branch_coverage=1 00:08:04.411 --rc genhtml_function_coverage=1 00:08:04.411 --rc genhtml_legend=1 00:08:04.411 --rc geninfo_all_blocks=1 00:08:04.411 --rc geninfo_unexecuted_blocks=1 00:08:04.411 00:08:04.411 ' 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:04.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.411 --rc genhtml_branch_coverage=1 00:08:04.411 --rc genhtml_function_coverage=1 00:08:04.411 --rc genhtml_legend=1 00:08:04.411 --rc geninfo_all_blocks=1 00:08:04.411 --rc geninfo_unexecuted_blocks=1 00:08:04.411 00:08:04.411 ' 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:04.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.411 --rc genhtml_branch_coverage=1 00:08:04.411 --rc genhtml_function_coverage=1 00:08:04.411 --rc genhtml_legend=1 00:08:04.411 --rc geninfo_all_blocks=1 00:08:04.411 --rc geninfo_unexecuted_blocks=1 00:08:04.411 00:08:04.411 ' 00:08:04.411 22:36:31 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:04.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.412 --rc genhtml_branch_coverage=1 00:08:04.412 --rc genhtml_function_coverage=1 00:08:04.412 --rc genhtml_legend=1 00:08:04.412 --rc geninfo_all_blocks=1 00:08:04.412 --rc geninfo_unexecuted_blocks=1 00:08:04.412 00:08:04.412 ' 00:08:04.412 22:36:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:04.412 22:36:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:04.412 22:36:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:04.412 22:36:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.412 22:36:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.412 22:36:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.412 ************************************ 00:08:04.412 START TEST skip_rpc 00:08:04.412 ************************************ 00:08:04.412 22:36:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:04.412 
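test_skip_rpc, traced next, starts the target with --no-rpc-server and asserts that RPCs are then impossible. The same check by hand, a sketch assuming the default socket path:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target runs, but /var/tmp/spdk.sock is never created
    ./scripts/rpc.py spdk_get_version \
        && echo 'unexpected: rpc server answered' \
        || echo 'expected: rpc server was skipped'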
22:36:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=466824 00:08:04.412 22:36:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:04.412 22:36:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:04.412 22:36:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:04.672 [2024-09-30 22:36:31.449477] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:08:04.672 [2024-09-30 22:36:31.449539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466824 ] 00:08:04.672 [2024-09-30 22:36:31.531824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.672 [2024-09-30 22:36:31.625881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 466824 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 466824 ']' 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 466824 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 466824 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 466824' 00:08:09.960 killing process with pid 466824 00:08:09.960 22:36:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 466824 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 466824 00:08:09.960 00:08:09.960 real 0m5.280s 00:08:09.960 user 0m5.017s 00:08:09.960 sys 0m0.304s 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.960 22:36:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.960 ************************************ 00:08:09.960 END TEST skip_rpc 00:08:09.960 ************************************ 00:08:09.960 22:36:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:09.961 22:36:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.961 22:36:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.961 22:36:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.961 ************************************ 00:08:09.961 START TEST skip_rpc_with_json 00:08:09.961 ************************************ 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=467859 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 467859 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 467859 ']' 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.961 22:36:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:09.961 [2024-09-30 22:36:36.796379] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
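skip_rpc_with_json, whose startup continues below, builds target state over RPC, saves it with save_config, and restarts the target from that file. A sketch of the cycle, assuming the CONFIG_PATH set earlier in the script and the app framework's --json load option:

    ./scripts/rpc.py nvmf_create_transport -t tcp    # give save_config a non-default transport to record
    ./scripts/rpc.py save_config > test/rpc/config.json
    kill $spdk_pid
    ./build/bin/spdk_tgt -m 0x1 --json test/rpc/config.json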
00:08:09.961 [2024-09-30 22:36:36.796428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467859 ] 00:08:09.961 [2024-09-30 22:36:36.873240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.961 [2024-09-30 22:36:36.928505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.902 [2024-09-30 22:36:37.596482] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:10.902 request: 00:08:10.902 { 00:08:10.902 "trtype": "tcp", 00:08:10.902 "method": "nvmf_get_transports", 00:08:10.902 "req_id": 1 00:08:10.902 } 00:08:10.902 Got JSON-RPC error response 00:08:10.902 response: 00:08:10.902 { 00:08:10.902 "code": -19, 00:08:10.902 "message": "No such device" 00:08:10.902 } 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.902 [2024-09-30 22:36:37.608581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.902 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:10.902 { 00:08:10.902 "subsystems": [ 00:08:10.902 { 00:08:10.902 "subsystem": "fsdev", 00:08:10.902 "config": [ 00:08:10.902 { 00:08:10.902 "method": "fsdev_set_opts", 00:08:10.902 "params": { 00:08:10.902 "fsdev_io_pool_size": 65535, 00:08:10.902 "fsdev_io_cache_size": 256 00:08:10.902 } 00:08:10.902 } 00:08:10.902 ] 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "subsystem": "vfio_user_target", 00:08:10.902 "config": null 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "subsystem": "keyring", 00:08:10.902 "config": [] 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "subsystem": "iobuf", 00:08:10.902 "config": [ 00:08:10.902 { 00:08:10.902 "method": "iobuf_set_options", 00:08:10.902 "params": { 00:08:10.902 "small_pool_count": 8192, 00:08:10.902 "large_pool_count": 1024, 00:08:10.902 "small_bufsize": 8192, 00:08:10.902 "large_bufsize": 135168 00:08:10.902 } 00:08:10.902 } 00:08:10.902 ] 00:08:10.902 }, 00:08:10.902 { 
00:08:10.902 "subsystem": "sock", 00:08:10.902 "config": [ 00:08:10.902 { 00:08:10.902 "method": "sock_set_default_impl", 00:08:10.902 "params": { 00:08:10.902 "impl_name": "posix" 00:08:10.902 } 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "method": "sock_impl_set_options", 00:08:10.902 "params": { 00:08:10.902 "impl_name": "ssl", 00:08:10.902 "recv_buf_size": 4096, 00:08:10.902 "send_buf_size": 4096, 00:08:10.902 "enable_recv_pipe": true, 00:08:10.902 "enable_quickack": false, 00:08:10.902 "enable_placement_id": 0, 00:08:10.902 "enable_zerocopy_send_server": true, 00:08:10.902 "enable_zerocopy_send_client": false, 00:08:10.902 "zerocopy_threshold": 0, 00:08:10.902 "tls_version": 0, 00:08:10.902 "enable_ktls": false 00:08:10.902 } 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "method": "sock_impl_set_options", 00:08:10.902 "params": { 00:08:10.902 "impl_name": "posix", 00:08:10.902 "recv_buf_size": 2097152, 00:08:10.902 "send_buf_size": 2097152, 00:08:10.902 "enable_recv_pipe": true, 00:08:10.902 "enable_quickack": false, 00:08:10.902 "enable_placement_id": 0, 00:08:10.902 "enable_zerocopy_send_server": true, 00:08:10.902 "enable_zerocopy_send_client": false, 00:08:10.902 "zerocopy_threshold": 0, 00:08:10.902 "tls_version": 0, 00:08:10.902 "enable_ktls": false 00:08:10.902 } 00:08:10.902 } 00:08:10.902 ] 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "subsystem": "vmd", 00:08:10.902 "config": [] 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "subsystem": "accel", 00:08:10.902 "config": [ 00:08:10.902 { 00:08:10.902 "method": "accel_set_options", 00:08:10.902 "params": { 00:08:10.902 "small_cache_size": 128, 00:08:10.902 "large_cache_size": 16, 00:08:10.902 "task_count": 2048, 00:08:10.902 "sequence_count": 2048, 00:08:10.902 "buf_count": 2048 00:08:10.902 } 00:08:10.902 } 00:08:10.902 ] 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "subsystem": "bdev", 00:08:10.902 "config": [ 00:08:10.902 { 00:08:10.902 "method": "bdev_set_options", 00:08:10.902 "params": { 00:08:10.902 "bdev_io_pool_size": 65535, 00:08:10.902 "bdev_io_cache_size": 256, 00:08:10.902 "bdev_auto_examine": true, 00:08:10.902 "iobuf_small_cache_size": 128, 00:08:10.902 "iobuf_large_cache_size": 16 00:08:10.902 } 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "method": "bdev_raid_set_options", 00:08:10.902 "params": { 00:08:10.902 "process_window_size_kb": 1024, 00:08:10.902 "process_max_bandwidth_mb_sec": 0 00:08:10.902 } 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "method": "bdev_iscsi_set_options", 00:08:10.902 "params": { 00:08:10.902 "timeout_sec": 30 00:08:10.902 } 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "method": "bdev_nvme_set_options", 00:08:10.902 "params": { 00:08:10.902 "action_on_timeout": "none", 00:08:10.902 "timeout_us": 0, 00:08:10.902 "timeout_admin_us": 0, 00:08:10.902 "keep_alive_timeout_ms": 10000, 00:08:10.902 "arbitration_burst": 0, 00:08:10.902 "low_priority_weight": 0, 00:08:10.902 "medium_priority_weight": 0, 00:08:10.902 "high_priority_weight": 0, 00:08:10.902 "nvme_adminq_poll_period_us": 10000, 00:08:10.902 "nvme_ioq_poll_period_us": 0, 00:08:10.902 "io_queue_requests": 0, 00:08:10.902 "delay_cmd_submit": true, 00:08:10.902 "transport_retry_count": 4, 00:08:10.902 "bdev_retry_count": 3, 00:08:10.902 "transport_ack_timeout": 0, 00:08:10.902 "ctrlr_loss_timeout_sec": 0, 00:08:10.902 "reconnect_delay_sec": 0, 00:08:10.902 "fast_io_fail_timeout_sec": 0, 00:08:10.902 "disable_auto_failback": false, 00:08:10.902 "generate_uuids": false, 00:08:10.902 "transport_tos": 0, 00:08:10.902 "nvme_error_stat": false, 
00:08:10.902 "rdma_srq_size": 0, 00:08:10.902 "io_path_stat": false, 00:08:10.902 "allow_accel_sequence": false, 00:08:10.902 "rdma_max_cq_size": 0, 00:08:10.902 "rdma_cm_event_timeout_ms": 0, 00:08:10.902 "dhchap_digests": [ 00:08:10.902 "sha256", 00:08:10.902 "sha384", 00:08:10.902 "sha512" 00:08:10.902 ], 00:08:10.902 "dhchap_dhgroups": [ 00:08:10.902 "null", 00:08:10.902 "ffdhe2048", 00:08:10.902 "ffdhe3072", 00:08:10.902 "ffdhe4096", 00:08:10.902 "ffdhe6144", 00:08:10.902 "ffdhe8192" 00:08:10.902 ] 00:08:10.902 } 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "method": "bdev_nvme_set_hotplug", 00:08:10.902 "params": { 00:08:10.902 "period_us": 100000, 00:08:10.902 "enable": false 00:08:10.902 } 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "method": "bdev_wait_for_examine" 00:08:10.902 } 00:08:10.902 ] 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "subsystem": "scsi", 00:08:10.902 "config": null 00:08:10.902 }, 00:08:10.902 { 00:08:10.902 "subsystem": "scheduler", 00:08:10.902 "config": [ 00:08:10.902 { 00:08:10.902 "method": "framework_set_scheduler", 00:08:10.902 "params": { 00:08:10.902 "name": "static" 00:08:10.902 } 00:08:10.903 } 00:08:10.903 ] 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "subsystem": "vhost_scsi", 00:08:10.903 "config": [] 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "subsystem": "vhost_blk", 00:08:10.903 "config": [] 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "subsystem": "ublk", 00:08:10.903 "config": [] 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "subsystem": "nbd", 00:08:10.903 "config": [] 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "subsystem": "nvmf", 00:08:10.903 "config": [ 00:08:10.903 { 00:08:10.903 "method": "nvmf_set_config", 00:08:10.903 "params": { 00:08:10.903 "discovery_filter": "match_any", 00:08:10.903 "admin_cmd_passthru": { 00:08:10.903 "identify_ctrlr": false 00:08:10.903 }, 00:08:10.903 "dhchap_digests": [ 00:08:10.903 "sha256", 00:08:10.903 "sha384", 00:08:10.903 "sha512" 00:08:10.903 ], 00:08:10.903 "dhchap_dhgroups": [ 00:08:10.903 "null", 00:08:10.903 "ffdhe2048", 00:08:10.903 "ffdhe3072", 00:08:10.903 "ffdhe4096", 00:08:10.903 "ffdhe6144", 00:08:10.903 "ffdhe8192" 00:08:10.903 ] 00:08:10.903 } 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "method": "nvmf_set_max_subsystems", 00:08:10.903 "params": { 00:08:10.903 "max_subsystems": 1024 00:08:10.903 } 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "method": "nvmf_set_crdt", 00:08:10.903 "params": { 00:08:10.903 "crdt1": 0, 00:08:10.903 "crdt2": 0, 00:08:10.903 "crdt3": 0 00:08:10.903 } 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "method": "nvmf_create_transport", 00:08:10.903 "params": { 00:08:10.903 "trtype": "TCP", 00:08:10.903 "max_queue_depth": 128, 00:08:10.903 "max_io_qpairs_per_ctrlr": 127, 00:08:10.903 "in_capsule_data_size": 4096, 00:08:10.903 "max_io_size": 131072, 00:08:10.903 "io_unit_size": 131072, 00:08:10.903 "max_aq_depth": 128, 00:08:10.903 "num_shared_buffers": 511, 00:08:10.903 "buf_cache_size": 4294967295, 00:08:10.903 "dif_insert_or_strip": false, 00:08:10.903 "zcopy": false, 00:08:10.903 "c2h_success": true, 00:08:10.903 "sock_priority": 0, 00:08:10.903 "abort_timeout_sec": 1, 00:08:10.903 "ack_timeout": 0, 00:08:10.903 "data_wr_pool_size": 0 00:08:10.903 } 00:08:10.903 } 00:08:10.903 ] 00:08:10.903 }, 00:08:10.903 { 00:08:10.903 "subsystem": "iscsi", 00:08:10.903 "config": [ 00:08:10.903 { 00:08:10.903 "method": "iscsi_set_options", 00:08:10.903 "params": { 00:08:10.903 "node_base": "iqn.2016-06.io.spdk", 00:08:10.903 "max_sessions": 128, 00:08:10.903 
"max_connections_per_session": 2, 00:08:10.903 "max_queue_depth": 64, 00:08:10.903 "default_time2wait": 2, 00:08:10.903 "default_time2retain": 20, 00:08:10.903 "first_burst_length": 8192, 00:08:10.903 "immediate_data": true, 00:08:10.903 "allow_duplicated_isid": false, 00:08:10.903 "error_recovery_level": 0, 00:08:10.903 "nop_timeout": 60, 00:08:10.903 "nop_in_interval": 30, 00:08:10.903 "disable_chap": false, 00:08:10.903 "require_chap": false, 00:08:10.903 "mutual_chap": false, 00:08:10.903 "chap_group": 0, 00:08:10.903 "max_large_datain_per_connection": 64, 00:08:10.903 "max_r2t_per_connection": 4, 00:08:10.903 "pdu_pool_size": 36864, 00:08:10.903 "immediate_data_pool_size": 16384, 00:08:10.903 "data_out_pool_size": 2048 00:08:10.903 } 00:08:10.903 } 00:08:10.903 ] 00:08:10.903 } 00:08:10.903 ] 00:08:10.903 } 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 467859 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 467859 ']' 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 467859 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467859 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 467859' 00:08:10.903 killing process with pid 467859 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 467859 00:08:10.903 22:36:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 467859 00:08:11.164 22:36:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=468199 00:08:11.164 22:36:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:11.164 22:36:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 468199 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 468199 ']' 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 468199 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 468199 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 468199' 00:08:16.451 killing process with pid 468199 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 468199 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 468199 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:16.451 00:08:16.451 real 0m6.595s 00:08:16.451 user 0m6.480s 00:08:16.451 sys 0m0.578s 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:16.451 ************************************ 00:08:16.451 END TEST skip_rpc_with_json 00:08:16.451 ************************************ 00:08:16.451 22:36:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:16.451 22:36:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.451 22:36:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.451 22:36:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.451 ************************************ 00:08:16.451 START TEST skip_rpc_with_delay 00:08:16.451 ************************************ 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:16.451 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:16.714 [2024-09-30 22:36:43.480596] app.c: 
840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:16.714 [2024-09-30 22:36:43.480674] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:16.714 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:16.714 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.714 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.714 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.714 00:08:16.714 real 0m0.084s 00:08:16.714 user 0m0.060s 00:08:16.714 sys 0m0.024s 00:08:16.714 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.714 22:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:16.714 ************************************ 00:08:16.714 END TEST skip_rpc_with_delay 00:08:16.714 ************************************ 00:08:16.714 22:36:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:16.714 22:36:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:16.714 22:36:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:16.714 22:36:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.714 22:36:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.714 22:36:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.714 ************************************ 00:08:16.714 START TEST exit_on_failed_rpc_init 00:08:16.714 ************************************ 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=469271 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 469271 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 469271 ']' 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.714 22:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:16.714 [2024-09-30 22:36:43.637618] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
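That skip_rpc_with_delay failure is the expected outcome: --wait-for-rpc holds subsystem initialization until an RPC arrives, which can never happen once --no-rpc-server removes the server, so app.c rejects the combination outright. Used on its own the flag behaves as sketched below; framework_start_init is the standard SPDK RPC that resumes startup:

    # Start paused; initialization waits for an explicit go-ahead over RPC.
    $SPDK/build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    sleep 5
    $SPDK/scripts/rpc.py framework_start_init   # finish initialization
    $SPDK/scripts/rpc.py spdk_get_version       # target now answers normally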
00:08:16.714 [2024-09-30 22:36:43.637679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469271 ] 00:08:16.714 [2024-09-30 22:36:43.720477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.975 [2024-09-30 22:36:43.783029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:17.545 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:17.545 [2024-09-30 22:36:44.495288] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:08:17.545 [2024-09-30 22:36:44.495342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469605 ] 00:08:17.806 [2024-09-30 22:36:44.574038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.806 [2024-09-30 22:36:44.638270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.806 [2024-09-30 22:36:44.638327] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:17.806 [2024-09-30 22:36:44.638337] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:17.806 [2024-09-30 22:36:44.638344] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 469271 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 469271 ']' 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 469271 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 469271 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 469271' 00:08:17.806 killing process with pid 469271 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 469271 00:08:17.806 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 469271 00:08:18.068 00:08:18.068 real 0m1.386s 00:08:18.068 user 0m1.636s 00:08:18.068 sys 0m0.408s 00:08:18.068 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.068 22:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:18.068 ************************************ 00:08:18.068 END TEST exit_on_failed_rpc_init 00:08:18.068 ************************************ 00:08:18.068 22:36:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:18.068 00:08:18.068 real 0m13.868s 00:08:18.068 user 0m13.405s 00:08:18.068 sys 0m1.654s 00:08:18.068 22:36:45 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.068 22:36:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.068 ************************************ 00:08:18.068 END TEST skip_rpc 00:08:18.068 ************************************ 00:08:18.068 22:36:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:18.068 22:36:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.068 22:36:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.068 22:36:45 -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.068 ************************************ 00:08:18.068 START TEST rpc_client 00:08:18.068 ************************************ 00:08:18.068 22:36:45 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:18.329 * Looking for test storage... 00:08:18.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.329 22:36:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.329 --rc genhtml_branch_coverage=1 00:08:18.329 --rc genhtml_function_coverage=1 00:08:18.329 --rc genhtml_legend=1 00:08:18.329 --rc geninfo_all_blocks=1 00:08:18.329 --rc geninfo_unexecuted_blocks=1 00:08:18.329 00:08:18.329 ' 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.329 --rc genhtml_branch_coverage=1 00:08:18.329 --rc genhtml_function_coverage=1 00:08:18.329 --rc genhtml_legend=1 00:08:18.329 --rc geninfo_all_blocks=1 00:08:18.329 --rc geninfo_unexecuted_blocks=1 00:08:18.329 00:08:18.329 ' 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.329 --rc genhtml_branch_coverage=1 00:08:18.329 --rc genhtml_function_coverage=1 00:08:18.329 --rc genhtml_legend=1 00:08:18.329 --rc geninfo_all_blocks=1 00:08:18.329 --rc geninfo_unexecuted_blocks=1 00:08:18.329 00:08:18.329 ' 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.329 --rc genhtml_branch_coverage=1 00:08:18.329 --rc genhtml_function_coverage=1 00:08:18.329 --rc genhtml_legend=1 00:08:18.329 --rc geninfo_all_blocks=1 00:08:18.329 --rc geninfo_unexecuted_blocks=1 00:08:18.329 00:08:18.329 ' 00:08:18.329 22:36:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:18.329 OK 00:08:18.329 22:36:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:18.329 00:08:18.329 real 0m0.223s 00:08:18.329 user 0m0.127s 00:08:18.329 sys 0m0.108s 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.329 22:36:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:18.329 ************************************ 00:08:18.329 END TEST rpc_client 00:08:18.329 ************************************ 00:08:18.329 22:36:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:08:18.329 22:36:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.329 22:36:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.329 22:36:45 -- common/autotest_common.sh@10 -- # set +x 00:08:18.591 ************************************ 00:08:18.591 START TEST json_config 00:08:18.591 ************************************ 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:18.591 22:36:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.591 22:36:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.591 22:36:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.591 22:36:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.591 22:36:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.591 22:36:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.591 22:36:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.591 22:36:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.591 22:36:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.591 22:36:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.591 22:36:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.591 22:36:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:18.591 22:36:45 json_config -- scripts/common.sh@345 -- # : 1 00:08:18.591 22:36:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.591 22:36:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.591 22:36:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:18.591 22:36:45 json_config -- scripts/common.sh@353 -- # local d=1 00:08:18.591 22:36:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.591 22:36:45 json_config -- scripts/common.sh@355 -- # echo 1 00:08:18.591 22:36:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.591 22:36:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:18.591 22:36:45 json_config -- scripts/common.sh@353 -- # local d=2 00:08:18.591 22:36:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.591 22:36:45 json_config -- scripts/common.sh@355 -- # echo 2 00:08:18.591 22:36:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.591 22:36:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.591 22:36:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.591 22:36:45 json_config -- scripts/common.sh@368 -- # return 0 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.591 --rc genhtml_branch_coverage=1 00:08:18.591 --rc genhtml_function_coverage=1 00:08:18.591 --rc genhtml_legend=1 00:08:18.591 --rc geninfo_all_blocks=1 00:08:18.591 --rc geninfo_unexecuted_blocks=1 00:08:18.591 00:08:18.591 ' 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.591 --rc genhtml_branch_coverage=1 00:08:18.591 --rc genhtml_function_coverage=1 00:08:18.591 --rc genhtml_legend=1 00:08:18.591 --rc geninfo_all_blocks=1 00:08:18.591 --rc geninfo_unexecuted_blocks=1 00:08:18.591 00:08:18.591 ' 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.591 --rc genhtml_branch_coverage=1 00:08:18.591 --rc genhtml_function_coverage=1 00:08:18.591 --rc genhtml_legend=1 00:08:18.591 --rc geninfo_all_blocks=1 00:08:18.591 --rc geninfo_unexecuted_blocks=1 00:08:18.591 00:08:18.591 ' 00:08:18.591 22:36:45 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.591 --rc genhtml_branch_coverage=1 00:08:18.591 --rc genhtml_function_coverage=1 00:08:18.591 --rc genhtml_legend=1 00:08:18.591 --rc geninfo_all_blocks=1 00:08:18.591 --rc geninfo_unexecuted_blocks=1 00:08:18.591 00:08:18.591 ' 00:08:18.591 22:36:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:18.591 22:36:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.591 22:36:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.591 22:36:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.591 22:36:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.591 22:36:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.591 22:36:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.591 22:36:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.591 22:36:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.591 22:36:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.591 22:36:45 json_config -- paths/export.sh@5 -- # export PATH 00:08:18.592 22:36:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@51 -- # : 0 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:08:18.592 22:36:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.592 22:36:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:18.592 INFO: JSON configuration test init 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:18.592 22:36:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:18.592 22:36:45 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:18.592 22:36:45 json_config -- json_config/common.sh@10 -- # shift 00:08:18.592 22:36:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:18.592 22:36:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:18.592 22:36:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:18.592 22:36:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:18.592 22:36:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:18.592 22:36:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=469855 00:08:18.592 22:36:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:18.592 Waiting for target to run... 00:08:18.592 22:36:45 json_config -- json_config/common.sh@25 -- # waitforlisten 469855 /var/tmp/spdk_tgt.sock 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@831 -- # '[' -z 469855 ']' 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:18.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:18.592 22:36:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.592 22:36:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:18.852 [2024-09-30 22:36:45.663476] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
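Unlike the rpc tests earlier, the json_config suite keeps its target on a dedicated socket (-r /var/tmp/spdk_tgt.sock) so an initiator app could run beside it on the default path, and every rpc.py call therefore carries -s. A condensed sketch of the startup sequence being logged here, with $SPDK again standing in for the workspace tree:

    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    sleep 5
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # Replay a generated configuration into the paused target (mirrors tgt_rpc load_config above).
    $SPDK/scripts/gen_nvme.sh --json-with-subsystems | $RPC load_config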
00:08:18.852 [2024-09-30 22:36:45.663553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469855 ] 00:08:19.113 [2024-09-30 22:36:45.975931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.113 [2024-09-30 22:36:46.018749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.684 22:36:46 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.684 22:36:46 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:19.684 22:36:46 json_config -- json_config/common.sh@26 -- # echo '' 00:08:19.684 00:08:19.684 22:36:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:19.684 22:36:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:19.684 22:36:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.684 22:36:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:19.684 22:36:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:19.684 22:36:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:19.684 22:36:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:19.684 22:36:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:19.684 22:36:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:19.684 22:36:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:19.684 22:36:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:20.254 22:36:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.254 22:36:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:20.254 22:36:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:20.254 22:36:47 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@54 -- # sort 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:20.254 22:36:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:20.254 22:36:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.254 22:36:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:20.515 22:36:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.515 22:36:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:20.515 22:36:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:20.515 MallocForNvmf0 00:08:20.515 22:36:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:20.515 22:36:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:20.777 MallocForNvmf1 00:08:20.777 22:36:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:20.777 22:36:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:20.777 [2024-09-30 22:36:47.769961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.777 22:36:47 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:20.777 22:36:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:21.037 22:36:47 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:21.037 22:36:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:21.297 22:36:48 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:21.297 22:36:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:21.297 22:36:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:21.297 22:36:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:21.557 [2024-09-30 22:36:48.452055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:21.557 22:36:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:21.557 22:36:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.557 22:36:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.557 22:36:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:21.557 22:36:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.557 22:36:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.557 22:36:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:21.557 22:36:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:21.557 22:36:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:21.818 MallocBdevForConfigChangeCheck 00:08:21.818 22:36:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:21.818 22:36:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.818 22:36:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.818 22:36:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:21.818 22:36:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:22.079 22:36:49 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:22.079 INFO: shutting down applications... 
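Condensed from the tgt_rpc calls above, the target state about to be saved and torn down was assembled with seven RPCs (arguments exactly as logged; only the long paths are shortened to $SPDK):

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # backing namespace 1
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # backing namespace 2
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420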
00:08:22.079 22:36:49 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:22.079 22:36:49 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:22.079 22:36:49 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:22.079 22:36:49 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:22.650 Calling clear_iscsi_subsystem 00:08:22.650 Calling clear_nvmf_subsystem 00:08:22.650 Calling clear_nbd_subsystem 00:08:22.650 Calling clear_ublk_subsystem 00:08:22.650 Calling clear_vhost_blk_subsystem 00:08:22.650 Calling clear_vhost_scsi_subsystem 00:08:22.650 Calling clear_bdev_subsystem 00:08:22.650 22:36:49 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:22.650 22:36:49 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:22.650 22:36:49 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:22.650 22:36:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:22.650 22:36:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:22.650 22:36:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:22.912 22:36:49 json_config -- json_config/json_config.sh@352 -- # break 00:08:22.912 22:36:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:22.912 22:36:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:22.912 22:36:49 json_config -- json_config/common.sh@31 -- # local app=target 00:08:22.912 22:36:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:22.912 22:36:49 json_config -- json_config/common.sh@35 -- # [[ -n 469855 ]] 00:08:22.912 22:36:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 469855 00:08:22.912 22:36:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:22.912 22:36:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:22.912 22:36:49 json_config -- json_config/common.sh@41 -- # kill -0 469855 00:08:22.912 22:36:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:23.483 22:36:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:23.483 22:36:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:23.483 22:36:50 json_config -- json_config/common.sh@41 -- # kill -0 469855 00:08:23.483 22:36:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:23.483 22:36:50 json_config -- json_config/common.sh@43 -- # break 00:08:23.483 22:36:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:23.483 22:36:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:23.483 SPDK target shutdown done 00:08:23.483 22:36:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:23.483 INFO: relaunching applications... 
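The shutdown just traced (json_config/common.sh lines 38-45) is a SIGINT followed by a bounded liveness poll: kill -0 delivers no signal and only tests whether the pid still exists. A condensed sketch of that loop, with $pid standing in for ${app_pid[$app]}:

    kill -SIGINT "$pid"                       # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # pid gone: shutdown finished
        sleep 0.5                             # ~15 s budget before the test gives up
    done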
00:08:23.483 22:36:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:23.483 22:36:50 json_config -- json_config/common.sh@9 -- # local app=target 00:08:23.483 22:36:50 json_config -- json_config/common.sh@10 -- # shift 00:08:23.483 22:36:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:23.483 22:36:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:23.483 22:36:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:23.483 22:36:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:23.483 22:36:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:23.483 22:36:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=470876 00:08:23.483 22:36:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:23.483 Waiting for target to run... 00:08:23.483 22:36:50 json_config -- json_config/common.sh@25 -- # waitforlisten 470876 /var/tmp/spdk_tgt.sock 00:08:23.484 22:36:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:23.484 22:36:50 json_config -- common/autotest_common.sh@831 -- # '[' -z 470876 ']' 00:08:23.484 22:36:50 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:23.484 22:36:50 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.484 22:36:50 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:23.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:23.484 22:36:50 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.484 22:36:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:23.484 [2024-09-30 22:36:50.414043] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:08:23.484 [2024-09-30 22:36:50.414109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470876 ] 00:08:23.744 [2024-09-30 22:36:50.730550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.005 [2024-09-30 22:36:50.784128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.576 [2024-09-30 22:36:51.284580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.576 [2024-09-30 22:36:51.316993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:24.576 22:36:51 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.576 22:36:51 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:24.576 22:36:51 json_config -- json_config/common.sh@26 -- # echo '' 00:08:24.576 00:08:24.576 22:36:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:24.576 22:36:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:24.576 INFO: Checking if target configuration is the same... 
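The relaunch replays the configuration captured earlier: spdk_tgt is started with --json pointing at spdk_tgt_config.json, and waitforlisten blocks until the RPC socket answers (the trace shows its retry budget of 100). A condensed sketch of that wait, using rpc.py as the probe; the in-tree helper is more elaborate:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
    pid=$!
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                             # poll until the UNIX socket accepts RPCs
    done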
00:08:24.576 22:36:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:24.576 22:36:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:24.576 22:36:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:24.576 + '[' 2 -ne 2 ']' 00:08:24.576 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:24.576 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:24.576 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:24.576 +++ basename /dev/fd/62 00:08:24.576 ++ mktemp /tmp/62.XXX 00:08:24.576 + tmp_file_1=/tmp/62.Wln 00:08:24.576 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:24.576 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:24.576 + tmp_file_2=/tmp/spdk_tgt_config.json.Uxb 00:08:24.576 + ret=0 00:08:24.576 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:24.836 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:24.837 + diff -u /tmp/62.Wln /tmp/spdk_tgt_config.json.Uxb 00:08:24.837 + echo 'INFO: JSON config files are the same' 00:08:24.837 INFO: JSON config files are the same 00:08:24.837 + rm /tmp/62.Wln /tmp/spdk_tgt_config.json.Uxb 00:08:24.837 + exit 0 00:08:24.837 22:36:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:24.837 22:36:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:24.837 INFO: changing configuration and checking if this can be detected... 00:08:24.837 22:36:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:24.837 22:36:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:25.098 22:36:51 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:25.098 22:36:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:25.098 22:36:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:25.098 + '[' 2 -ne 2 ']' 00:08:25.098 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:25.098 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
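json_diff.sh sidesteps false mismatches from key ordering by passing both configs through config_filter.py -method sort before diffing; xtrace elides the file redirections, so they are reconstructed in this condensed sketch ($config_a and $config_b stand in for the two inputs):

    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort < "$config_a" > "$tmp_file_1"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort < "$config_b" > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo 'INFO: JSON config files are the same'   # exit 0 path, taken in the first pass
    fi
    rm "$tmp_file_1" "$tmp_file_2"

The first pass above returned 0; the pass now in progress, run after bdev_malloc_delete removed MallocBdevForConfigChangeCheck, is expected to return 1, which json_config.sh then reports as a detected configuration change.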
00:08:25.098 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:25.098 +++ basename /dev/fd/62 00:08:25.098 ++ mktemp /tmp/62.XXX 00:08:25.098 + tmp_file_1=/tmp/62.L9q 00:08:25.098 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:25.098 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:25.098 + tmp_file_2=/tmp/spdk_tgt_config.json.2c9 00:08:25.098 + ret=0 00:08:25.098 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:25.358 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:25.358 + diff -u /tmp/62.L9q /tmp/spdk_tgt_config.json.2c9 00:08:25.358 + ret=1 00:08:25.358 + echo '=== Start of file: /tmp/62.L9q ===' 00:08:25.358 + cat /tmp/62.L9q 00:08:25.358 + echo '=== End of file: /tmp/62.L9q ===' 00:08:25.358 + echo '' 00:08:25.358 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2c9 ===' 00:08:25.358 + cat /tmp/spdk_tgt_config.json.2c9 00:08:25.358 + echo '=== End of file: /tmp/spdk_tgt_config.json.2c9 ===' 00:08:25.358 + echo '' 00:08:25.358 + rm /tmp/62.L9q /tmp/spdk_tgt_config.json.2c9 00:08:25.358 + exit 1 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:25.358 INFO: configuration change detected. 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 470876 ]] 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.358 22:36:52 json_config -- json_config/json_config.sh@330 -- # killprocess 470876 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@950 -- # '[' -z 470876 ']' 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@954 -- # kill -0 470876 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@955 -- # uname 00:08:25.358 22:36:52 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.358 22:36:52 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 470876 00:08:25.618 22:36:52 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.618 22:36:52 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.618 22:36:52 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 470876' 00:08:25.618 killing process with pid 470876 00:08:25.618 22:36:52 json_config -- common/autotest_common.sh@969 -- # kill 470876 00:08:25.618 22:36:52 json_config -- common/autotest_common.sh@974 -- # wait 470876 00:08:25.879 22:36:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:25.879 22:36:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:25.879 22:36:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.879 22:36:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.879 22:36:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:25.879 22:36:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:25.879 INFO: Success 00:08:25.879 00:08:25.879 real 0m7.358s 00:08:25.879 user 0m8.829s 00:08:25.879 sys 0m1.980s 00:08:25.879 22:36:52 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.879 22:36:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.879 ************************************ 00:08:25.879 END TEST json_config 00:08:25.879 ************************************ 00:08:25.879 22:36:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:25.879 22:36:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.879 22:36:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.879 22:36:52 -- common/autotest_common.sh@10 -- # set +x 00:08:25.879 ************************************ 00:08:25.879 START TEST json_config_extra_key 00:08:25.879 ************************************ 00:08:25.879 22:36:52 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:25.879 22:36:52 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:25.879 22:36:52 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:08:25.879 22:36:52 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:26.140 22:36:52 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.140 22:36:52 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:26.140 22:36:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.141 22:36:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:26.141 22:36:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.141 22:36:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.141 22:36:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.141 22:36:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:26.141 22:36:52 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.141 22:36:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:26.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.141 --rc genhtml_branch_coverage=1 00:08:26.141 --rc genhtml_function_coverage=1 00:08:26.141 --rc genhtml_legend=1 00:08:26.141 --rc geninfo_all_blocks=1 00:08:26.141 --rc geninfo_unexecuted_blocks=1 00:08:26.141 00:08:26.141 ' 00:08:26.141 22:36:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:26.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.141 --rc genhtml_branch_coverage=1 00:08:26.141 --rc genhtml_function_coverage=1 00:08:26.141 --rc genhtml_legend=1 00:08:26.141 --rc geninfo_all_blocks=1 00:08:26.141 --rc geninfo_unexecuted_blocks=1 00:08:26.141 00:08:26.141 ' 00:08:26.141 22:36:52 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:26.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.141 --rc genhtml_branch_coverage=1 00:08:26.141 --rc genhtml_function_coverage=1 00:08:26.141 --rc genhtml_legend=1 00:08:26.141 --rc geninfo_all_blocks=1 00:08:26.141 --rc geninfo_unexecuted_blocks=1 00:08:26.141 00:08:26.141 ' 00:08:26.141 22:36:52 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:26.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.141 --rc genhtml_branch_coverage=1 00:08:26.141 --rc genhtml_function_coverage=1 00:08:26.141 --rc genhtml_legend=1 00:08:26.141 --rc geninfo_all_blocks=1 00:08:26.141 --rc geninfo_unexecuted_blocks=1 00:08:26.141 00:08:26.141 ' 00:08:26.141 22:36:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.141 22:36:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.141 22:36:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.141 22:36:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.141 22:36:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.141 22:36:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.141 22:36:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.141 22:36:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.141 22:36:53 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.141 22:36:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:26.141 22:36:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.141 22:36:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:26.141 INFO: launching applications... 
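One real wart is captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and bash's test builtin prints "integer expression expected" because -eq needs two integers while the variable expands empty. The failure is harmless here because the '[' sits in an if condition and simply takes the false branch, but the usual defensive pattern is to supply a numeric default before comparing; a minimal sketch, where $flag is a placeholder name rather than the in-tree variable:

    if [ "${flag:-0}" -eq 1 ]; then           # ${flag:-0} substitutes 0 when unset or empty
        echo "feature enabled"
    fi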
00:08:26.141 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=471665 00:08:26.141 22:36:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:26.141 Waiting for target to run... 00:08:26.142 22:36:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 471665 /var/tmp/spdk_tgt.sock 00:08:26.142 22:36:53 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 471665 ']' 00:08:26.142 22:36:53 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:26.142 22:36:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:26.142 22:36:53 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.142 22:36:53 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:26.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:26.142 22:36:53 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.142 22:36:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:26.142 [2024-09-30 22:36:53.083448] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:08:26.142 [2024-09-30 22:36:53.083524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471665 ] 00:08:26.402 [2024-09-30 22:36:53.410358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.663 [2024-09-30 22:36:53.462527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.924 22:36:53 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.924 22:36:53 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:26.924 00:08:26.924 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:26.924 INFO: shutting down applications... 
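json_config/common.sh drives every app through associative arrays keyed by app name ('target' here; an 'initiator' entry exists in other configurations), which is what the declare -A lines earlier in the trace populate. A minimal sketch of the pattern, with values copied from this run:

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
    # launch shape: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}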
00:08:26.924 22:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 471665 ]] 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 471665 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 471665 00:08:26.924 22:36:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:27.494 22:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:27.494 22:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:27.494 22:36:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 471665 00:08:27.494 22:36:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:27.494 22:36:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:27.494 22:36:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:27.494 22:36:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:27.494 SPDK target shutdown done 00:08:27.494 22:36:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:27.494 Success 00:08:27.494 00:08:27.494 real 0m1.564s 00:08:27.494 user 0m1.130s 00:08:27.494 sys 0m0.464s 00:08:27.494 22:36:54 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.494 22:36:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:27.494 ************************************ 00:08:27.494 END TEST json_config_extra_key 00:08:27.494 ************************************ 00:08:27.494 22:36:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:27.494 22:36:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.494 22:36:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.494 22:36:54 -- common/autotest_common.sh@10 -- # set +x 00:08:27.494 ************************************ 00:08:27.494 START TEST alias_rpc 00:08:27.494 ************************************ 00:08:27.494 22:36:54 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:27.755 * Looking for test storage... 
00:08:27.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:27.755 22:36:54 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.755 22:36:54 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.755 22:36:54 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:27.755 22:36:54 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.755 22:36:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:27.755 22:36:54 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.755 22:36:54 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.755 --rc genhtml_branch_coverage=1 00:08:27.755 --rc genhtml_function_coverage=1 00:08:27.755 --rc genhtml_legend=1 00:08:27.755 --rc geninfo_all_blocks=1 00:08:27.755 --rc geninfo_unexecuted_blocks=1 00:08:27.755 00:08:27.755 ' 00:08:27.755 22:36:54 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.756 --rc genhtml_branch_coverage=1 00:08:27.756 --rc genhtml_function_coverage=1 00:08:27.756 --rc genhtml_legend=1 00:08:27.756 --rc geninfo_all_blocks=1 00:08:27.756 --rc geninfo_unexecuted_blocks=1 00:08:27.756 00:08:27.756 ' 00:08:27.756 22:36:54 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.756 --rc genhtml_branch_coverage=1 00:08:27.756 --rc genhtml_function_coverage=1 00:08:27.756 --rc genhtml_legend=1 00:08:27.756 --rc geninfo_all_blocks=1 00:08:27.756 --rc geninfo_unexecuted_blocks=1 00:08:27.756 00:08:27.756 ' 00:08:27.756 22:36:54 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.756 --rc genhtml_branch_coverage=1 00:08:27.756 --rc genhtml_function_coverage=1 00:08:27.756 --rc genhtml_legend=1 00:08:27.756 --rc geninfo_all_blocks=1 00:08:27.756 --rc geninfo_unexecuted_blocks=1 00:08:27.756 00:08:27.756 ' 00:08:27.756 22:36:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:27.756 22:36:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=472057 00:08:27.756 22:36:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 472057 00:08:27.756 22:36:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:27.756 22:36:54 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 472057 ']' 00:08:27.756 22:36:54 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.756 22:36:54 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.756 22:36:54 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.756 22:36:54 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.756 22:36:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.756 [2024-09-30 22:36:54.719514] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:08:27.756 [2024-09-30 22:36:54.719566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472057 ] 00:08:28.015 [2024-09-30 22:36:54.796996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.015 [2024-09-30 22:36:54.853489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.585 22:36:55 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.585 22:36:55 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:28.585 22:36:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:28.844 22:36:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 472057 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 472057 ']' 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 472057 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 472057 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 472057' 00:08:28.844 killing process with pid 472057 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@969 -- # kill 472057 00:08:28.844 22:36:55 alias_rpc -- common/autotest_common.sh@974 -- # wait 472057 00:08:29.104 00:08:29.104 real 0m1.521s 00:08:29.104 user 0m1.693s 00:08:29.104 sys 0m0.411s 00:08:29.104 22:36:55 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.104 22:36:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.104 ************************************ 00:08:29.104 END TEST alias_rpc 00:08:29.104 ************************************ 00:08:29.104 22:36:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:29.104 22:36:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:29.104 22:36:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.104 22:36:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.104 22:36:56 -- common/autotest_common.sh@10 -- # set +x 00:08:29.104 ************************************ 00:08:29.104 START TEST spdkcli_tcp 00:08:29.104 ************************************ 00:08:29.104 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:29.364 * Looking for test storage... 
00:08:29.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.364 22:36:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:29.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.364 --rc genhtml_branch_coverage=1 00:08:29.364 --rc genhtml_function_coverage=1 00:08:29.364 --rc genhtml_legend=1 00:08:29.364 --rc geninfo_all_blocks=1 00:08:29.364 --rc geninfo_unexecuted_blocks=1 00:08:29.364 00:08:29.364 ' 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:29.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.364 --rc genhtml_branch_coverage=1 00:08:29.364 --rc genhtml_function_coverage=1 00:08:29.364 --rc genhtml_legend=1 00:08:29.364 --rc geninfo_all_blocks=1 00:08:29.364 --rc 
geninfo_unexecuted_blocks=1 00:08:29.364 00:08:29.364 ' 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:29.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.364 --rc genhtml_branch_coverage=1 00:08:29.364 --rc genhtml_function_coverage=1 00:08:29.364 --rc genhtml_legend=1 00:08:29.364 --rc geninfo_all_blocks=1 00:08:29.364 --rc geninfo_unexecuted_blocks=1 00:08:29.364 00:08:29.364 ' 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:29.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.364 --rc genhtml_branch_coverage=1 00:08:29.364 --rc genhtml_function_coverage=1 00:08:29.364 --rc genhtml_legend=1 00:08:29.364 --rc geninfo_all_blocks=1 00:08:29.364 --rc geninfo_unexecuted_blocks=1 00:08:29.364 00:08:29.364 ' 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=472453 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 472453 00:08:29.364 22:36:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 472453 ']' 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.364 22:36:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.365 22:36:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.365 22:36:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.365 [2024-09-30 22:36:56.338841] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:08:29.365 [2024-09-30 22:36:56.338919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472453 ] 00:08:29.624 [2024-09-30 22:36:56.421276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:29.624 [2024-09-30 22:36:56.483532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.624 [2024-09-30 22:36:56.483533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.193 22:36:57 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.193 22:36:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:30.193 22:36:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:30.193 22:36:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=472478 00:08:30.193 22:36:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:30.454 [ 00:08:30.454 "bdev_malloc_delete", 00:08:30.454 "bdev_malloc_create", 00:08:30.454 "bdev_null_resize", 00:08:30.454 "bdev_null_delete", 00:08:30.454 "bdev_null_create", 00:08:30.454 "bdev_nvme_cuse_unregister", 00:08:30.454 "bdev_nvme_cuse_register", 00:08:30.454 "bdev_opal_new_user", 00:08:30.454 "bdev_opal_set_lock_state", 00:08:30.454 "bdev_opal_delete", 00:08:30.454 "bdev_opal_get_info", 00:08:30.454 "bdev_opal_create", 00:08:30.454 "bdev_nvme_opal_revert", 00:08:30.454 "bdev_nvme_opal_init", 00:08:30.454 "bdev_nvme_send_cmd", 00:08:30.454 "bdev_nvme_set_keys", 00:08:30.454 "bdev_nvme_get_path_iostat", 00:08:30.454 "bdev_nvme_get_mdns_discovery_info", 00:08:30.454 "bdev_nvme_stop_mdns_discovery", 00:08:30.454 "bdev_nvme_start_mdns_discovery", 00:08:30.454 "bdev_nvme_set_multipath_policy", 00:08:30.454 "bdev_nvme_set_preferred_path", 00:08:30.454 "bdev_nvme_get_io_paths", 00:08:30.454 "bdev_nvme_remove_error_injection", 00:08:30.454 "bdev_nvme_add_error_injection", 00:08:30.454 "bdev_nvme_get_discovery_info", 00:08:30.454 "bdev_nvme_stop_discovery", 00:08:30.454 "bdev_nvme_start_discovery", 00:08:30.454 "bdev_nvme_get_controller_health_info", 00:08:30.454 "bdev_nvme_disable_controller", 00:08:30.454 "bdev_nvme_enable_controller", 00:08:30.454 "bdev_nvme_reset_controller", 00:08:30.454 "bdev_nvme_get_transport_statistics", 00:08:30.454 "bdev_nvme_apply_firmware", 00:08:30.454 "bdev_nvme_detach_controller", 00:08:30.454 "bdev_nvme_get_controllers", 00:08:30.454 "bdev_nvme_attach_controller", 00:08:30.454 "bdev_nvme_set_hotplug", 00:08:30.454 "bdev_nvme_set_options", 00:08:30.454 "bdev_passthru_delete", 00:08:30.454 "bdev_passthru_create", 00:08:30.454 "bdev_lvol_set_parent_bdev", 00:08:30.454 "bdev_lvol_set_parent", 00:08:30.454 "bdev_lvol_check_shallow_copy", 00:08:30.454 "bdev_lvol_start_shallow_copy", 00:08:30.454 "bdev_lvol_grow_lvstore", 00:08:30.454 "bdev_lvol_get_lvols", 00:08:30.454 "bdev_lvol_get_lvstores", 00:08:30.454 "bdev_lvol_delete", 00:08:30.454 "bdev_lvol_set_read_only", 00:08:30.454 "bdev_lvol_resize", 00:08:30.454 "bdev_lvol_decouple_parent", 00:08:30.454 "bdev_lvol_inflate", 00:08:30.454 "bdev_lvol_rename", 00:08:30.454 "bdev_lvol_clone_bdev", 00:08:30.454 "bdev_lvol_clone", 00:08:30.454 "bdev_lvol_snapshot", 00:08:30.454 "bdev_lvol_create", 00:08:30.454 "bdev_lvol_delete_lvstore", 00:08:30.454 "bdev_lvol_rename_lvstore", 
00:08:30.454 "bdev_lvol_create_lvstore", 00:08:30.454 "bdev_raid_set_options", 00:08:30.454 "bdev_raid_remove_base_bdev", 00:08:30.454 "bdev_raid_add_base_bdev", 00:08:30.454 "bdev_raid_delete", 00:08:30.454 "bdev_raid_create", 00:08:30.454 "bdev_raid_get_bdevs", 00:08:30.454 "bdev_error_inject_error", 00:08:30.454 "bdev_error_delete", 00:08:30.454 "bdev_error_create", 00:08:30.454 "bdev_split_delete", 00:08:30.454 "bdev_split_create", 00:08:30.454 "bdev_delay_delete", 00:08:30.454 "bdev_delay_create", 00:08:30.454 "bdev_delay_update_latency", 00:08:30.454 "bdev_zone_block_delete", 00:08:30.454 "bdev_zone_block_create", 00:08:30.454 "blobfs_create", 00:08:30.454 "blobfs_detect", 00:08:30.454 "blobfs_set_cache_size", 00:08:30.454 "bdev_aio_delete", 00:08:30.454 "bdev_aio_rescan", 00:08:30.454 "bdev_aio_create", 00:08:30.454 "bdev_ftl_set_property", 00:08:30.454 "bdev_ftl_get_properties", 00:08:30.455 "bdev_ftl_get_stats", 00:08:30.455 "bdev_ftl_unmap", 00:08:30.455 "bdev_ftl_unload", 00:08:30.455 "bdev_ftl_delete", 00:08:30.455 "bdev_ftl_load", 00:08:30.455 "bdev_ftl_create", 00:08:30.455 "bdev_virtio_attach_controller", 00:08:30.455 "bdev_virtio_scsi_get_devices", 00:08:30.455 "bdev_virtio_detach_controller", 00:08:30.455 "bdev_virtio_blk_set_hotplug", 00:08:30.455 "bdev_iscsi_delete", 00:08:30.455 "bdev_iscsi_create", 00:08:30.455 "bdev_iscsi_set_options", 00:08:30.455 "accel_error_inject_error", 00:08:30.455 "ioat_scan_accel_module", 00:08:30.455 "dsa_scan_accel_module", 00:08:30.455 "iaa_scan_accel_module", 00:08:30.455 "vfu_virtio_create_fs_endpoint", 00:08:30.455 "vfu_virtio_create_scsi_endpoint", 00:08:30.455 "vfu_virtio_scsi_remove_target", 00:08:30.455 "vfu_virtio_scsi_add_target", 00:08:30.455 "vfu_virtio_create_blk_endpoint", 00:08:30.455 "vfu_virtio_delete_endpoint", 00:08:30.455 "keyring_file_remove_key", 00:08:30.455 "keyring_file_add_key", 00:08:30.455 "keyring_linux_set_options", 00:08:30.455 "fsdev_aio_delete", 00:08:30.455 "fsdev_aio_create", 00:08:30.455 "iscsi_get_histogram", 00:08:30.455 "iscsi_enable_histogram", 00:08:30.455 "iscsi_set_options", 00:08:30.455 "iscsi_get_auth_groups", 00:08:30.455 "iscsi_auth_group_remove_secret", 00:08:30.455 "iscsi_auth_group_add_secret", 00:08:30.455 "iscsi_delete_auth_group", 00:08:30.455 "iscsi_create_auth_group", 00:08:30.455 "iscsi_set_discovery_auth", 00:08:30.455 "iscsi_get_options", 00:08:30.455 "iscsi_target_node_request_logout", 00:08:30.455 "iscsi_target_node_set_redirect", 00:08:30.455 "iscsi_target_node_set_auth", 00:08:30.455 "iscsi_target_node_add_lun", 00:08:30.455 "iscsi_get_stats", 00:08:30.455 "iscsi_get_connections", 00:08:30.455 "iscsi_portal_group_set_auth", 00:08:30.455 "iscsi_start_portal_group", 00:08:30.455 "iscsi_delete_portal_group", 00:08:30.455 "iscsi_create_portal_group", 00:08:30.455 "iscsi_get_portal_groups", 00:08:30.455 "iscsi_delete_target_node", 00:08:30.455 "iscsi_target_node_remove_pg_ig_maps", 00:08:30.455 "iscsi_target_node_add_pg_ig_maps", 00:08:30.455 "iscsi_create_target_node", 00:08:30.455 "iscsi_get_target_nodes", 00:08:30.455 "iscsi_delete_initiator_group", 00:08:30.455 "iscsi_initiator_group_remove_initiators", 00:08:30.455 "iscsi_initiator_group_add_initiators", 00:08:30.455 "iscsi_create_initiator_group", 00:08:30.455 "iscsi_get_initiator_groups", 00:08:30.455 "nvmf_set_crdt", 00:08:30.455 "nvmf_set_config", 00:08:30.455 "nvmf_set_max_subsystems", 00:08:30.455 "nvmf_stop_mdns_prr", 00:08:30.455 "nvmf_publish_mdns_prr", 00:08:30.455 "nvmf_subsystem_get_listeners", 00:08:30.455 
"nvmf_subsystem_get_qpairs", 00:08:30.455 "nvmf_subsystem_get_controllers", 00:08:30.455 "nvmf_get_stats", 00:08:30.455 "nvmf_get_transports", 00:08:30.455 "nvmf_create_transport", 00:08:30.455 "nvmf_get_targets", 00:08:30.455 "nvmf_delete_target", 00:08:30.455 "nvmf_create_target", 00:08:30.455 "nvmf_subsystem_allow_any_host", 00:08:30.455 "nvmf_subsystem_set_keys", 00:08:30.455 "nvmf_subsystem_remove_host", 00:08:30.455 "nvmf_subsystem_add_host", 00:08:30.455 "nvmf_ns_remove_host", 00:08:30.455 "nvmf_ns_add_host", 00:08:30.455 "nvmf_subsystem_remove_ns", 00:08:30.455 "nvmf_subsystem_set_ns_ana_group", 00:08:30.455 "nvmf_subsystem_add_ns", 00:08:30.455 "nvmf_subsystem_listener_set_ana_state", 00:08:30.455 "nvmf_discovery_get_referrals", 00:08:30.455 "nvmf_discovery_remove_referral", 00:08:30.455 "nvmf_discovery_add_referral", 00:08:30.455 "nvmf_subsystem_remove_listener", 00:08:30.455 "nvmf_subsystem_add_listener", 00:08:30.455 "nvmf_delete_subsystem", 00:08:30.455 "nvmf_create_subsystem", 00:08:30.455 "nvmf_get_subsystems", 00:08:30.455 "env_dpdk_get_mem_stats", 00:08:30.455 "nbd_get_disks", 00:08:30.455 "nbd_stop_disk", 00:08:30.455 "nbd_start_disk", 00:08:30.455 "ublk_recover_disk", 00:08:30.455 "ublk_get_disks", 00:08:30.455 "ublk_stop_disk", 00:08:30.455 "ublk_start_disk", 00:08:30.455 "ublk_destroy_target", 00:08:30.455 "ublk_create_target", 00:08:30.455 "virtio_blk_create_transport", 00:08:30.455 "virtio_blk_get_transports", 00:08:30.455 "vhost_controller_set_coalescing", 00:08:30.455 "vhost_get_controllers", 00:08:30.455 "vhost_delete_controller", 00:08:30.455 "vhost_create_blk_controller", 00:08:30.455 "vhost_scsi_controller_remove_target", 00:08:30.455 "vhost_scsi_controller_add_target", 00:08:30.455 "vhost_start_scsi_controller", 00:08:30.455 "vhost_create_scsi_controller", 00:08:30.455 "thread_set_cpumask", 00:08:30.455 "scheduler_set_options", 00:08:30.455 "framework_get_governor", 00:08:30.455 "framework_get_scheduler", 00:08:30.455 "framework_set_scheduler", 00:08:30.455 "framework_get_reactors", 00:08:30.455 "thread_get_io_channels", 00:08:30.455 "thread_get_pollers", 00:08:30.455 "thread_get_stats", 00:08:30.455 "framework_monitor_context_switch", 00:08:30.455 "spdk_kill_instance", 00:08:30.455 "log_enable_timestamps", 00:08:30.455 "log_get_flags", 00:08:30.455 "log_clear_flag", 00:08:30.455 "log_set_flag", 00:08:30.455 "log_get_level", 00:08:30.455 "log_set_level", 00:08:30.455 "log_get_print_level", 00:08:30.455 "log_set_print_level", 00:08:30.455 "framework_enable_cpumask_locks", 00:08:30.455 "framework_disable_cpumask_locks", 00:08:30.455 "framework_wait_init", 00:08:30.455 "framework_start_init", 00:08:30.455 "scsi_get_devices", 00:08:30.455 "bdev_get_histogram", 00:08:30.455 "bdev_enable_histogram", 00:08:30.455 "bdev_set_qos_limit", 00:08:30.455 "bdev_set_qd_sampling_period", 00:08:30.455 "bdev_get_bdevs", 00:08:30.455 "bdev_reset_iostat", 00:08:30.455 "bdev_get_iostat", 00:08:30.455 "bdev_examine", 00:08:30.455 "bdev_wait_for_examine", 00:08:30.455 "bdev_set_options", 00:08:30.455 "accel_get_stats", 00:08:30.455 "accel_set_options", 00:08:30.455 "accel_set_driver", 00:08:30.455 "accel_crypto_key_destroy", 00:08:30.455 "accel_crypto_keys_get", 00:08:30.455 "accel_crypto_key_create", 00:08:30.455 "accel_assign_opc", 00:08:30.455 "accel_get_module_info", 00:08:30.455 "accel_get_opc_assignments", 00:08:30.455 "vmd_rescan", 00:08:30.455 "vmd_remove_device", 00:08:30.455 "vmd_enable", 00:08:30.455 "sock_get_default_impl", 00:08:30.455 "sock_set_default_impl", 
00:08:30.455 "sock_impl_set_options", 00:08:30.455 "sock_impl_get_options", 00:08:30.455 "iobuf_get_stats", 00:08:30.455 "iobuf_set_options", 00:08:30.455 "keyring_get_keys", 00:08:30.455 "vfu_tgt_set_base_path", 00:08:30.455 "framework_get_pci_devices", 00:08:30.455 "framework_get_config", 00:08:30.455 "framework_get_subsystems", 00:08:30.455 "fsdev_set_opts", 00:08:30.455 "fsdev_get_opts", 00:08:30.455 "trace_get_info", 00:08:30.455 "trace_get_tpoint_group_mask", 00:08:30.455 "trace_disable_tpoint_group", 00:08:30.455 "trace_enable_tpoint_group", 00:08:30.455 "trace_clear_tpoint_mask", 00:08:30.455 "trace_set_tpoint_mask", 00:08:30.455 "notify_get_notifications", 00:08:30.455 "notify_get_types", 00:08:30.455 "spdk_get_version", 00:08:30.455 "rpc_get_methods" 00:08:30.455 ] 00:08:30.455 22:36:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 22:36:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:30.455 22:36:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 472453 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 472453 ']' 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 472453 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 472453 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 472453' 00:08:30.455 killing process with pid 472453 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 472453 00:08:30.455 22:36:57 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 472453 00:08:30.717 00:08:30.717 real 0m1.551s 00:08:30.717 user 0m2.778s 00:08:30.717 sys 0m0.474s 00:08:30.717 22:36:57 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.717 22:36:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 ************************************ 00:08:30.717 END TEST spdkcli_tcp 00:08:30.717 ************************************ 00:08:30.717 22:36:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:30.717 22:36:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.717 22:36:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.717 22:36:57 -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 ************************************ 00:08:30.717 START TEST dpdk_mem_utility 00:08:30.717 ************************************ 00:08:30.717 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:30.978 * Looking for test storage... 
00:08:30.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:30.978 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:30.978 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:08:30.978 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:30.978 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.978 22:36:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:30.978 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.978 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:30.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.978 --rc genhtml_branch_coverage=1 00:08:30.978 --rc genhtml_function_coverage=1 00:08:30.979 --rc genhtml_legend=1 00:08:30.979 --rc geninfo_all_blocks=1 00:08:30.979 --rc geninfo_unexecuted_blocks=1 00:08:30.979 00:08:30.979 ' 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:30.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.979 --rc 
genhtml_branch_coverage=1 00:08:30.979 --rc genhtml_function_coverage=1 00:08:30.979 --rc genhtml_legend=1 00:08:30.979 --rc geninfo_all_blocks=1 00:08:30.979 --rc geninfo_unexecuted_blocks=1 00:08:30.979 00:08:30.979 ' 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:30.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.979 --rc genhtml_branch_coverage=1 00:08:30.979 --rc genhtml_function_coverage=1 00:08:30.979 --rc genhtml_legend=1 00:08:30.979 --rc geninfo_all_blocks=1 00:08:30.979 --rc geninfo_unexecuted_blocks=1 00:08:30.979 00:08:30.979 ' 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:30.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.979 --rc genhtml_branch_coverage=1 00:08:30.979 --rc genhtml_function_coverage=1 00:08:30.979 --rc genhtml_legend=1 00:08:30.979 --rc geninfo_all_blocks=1 00:08:30.979 --rc geninfo_unexecuted_blocks=1 00:08:30.979 00:08:30.979 ' 00:08:30.979 22:36:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:30.979 22:36:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=472870 00:08:30.979 22:36:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 472870 00:08:30.979 22:36:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 472870 ']' 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.979 22:36:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:30.979 [2024-09-30 22:36:57.962064] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
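The dpdk_mem_utility test starting here follows three steps that the trace below makes visible: boot spdk_tgt, ask it over RPC to dump its DPDK memory state, then post-process the dump with scripts/dpdk_mem_info.py (the MEM_SCRIPT above). A sketch of the same sequence run by hand, assuming the target is listening on the default /var/tmp/spdk.sock:

    # Trigger the in-target dump; the reply names the dump file
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py
    # Per-element detail, as the test does with -m 0 below
    ./scripts/dpdk_mem_info.py -m 0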
00:08:30.979 [2024-09-30 22:36:57.962142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472870 ] 00:08:31.239 [2024-09-30 22:36:58.041250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.239 [2024-09-30 22:36:58.103173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.809 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.809 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:31.809 22:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:31.809 22:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:31.809 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.809 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:31.809 { 00:08:31.809 "filename": "/tmp/spdk_mem_dump.txt" 00:08:31.809 } 00:08:31.809 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.809 22:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:31.809 DPDK memory size 860.000000 MiB in 1 heap(s) 00:08:31.809 1 heaps totaling size 860.000000 MiB 00:08:31.809 size: 860.000000 MiB heap id: 0 00:08:31.809 end heaps---------- 00:08:31.809 9 mempools totaling size 642.649841 MiB 00:08:31.809 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:31.809 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:31.809 size: 92.545471 MiB name: bdev_io_472870 00:08:31.809 size: 51.011292 MiB name: evtpool_472870 00:08:31.809 size: 50.003479 MiB name: msgpool_472870 00:08:31.809 size: 36.509338 MiB name: fsdev_io_472870 00:08:31.809 size: 21.763794 MiB name: PDU_Pool 00:08:31.809 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:31.809 size: 0.026123 MiB name: Session_Pool 00:08:31.809 end mempools------- 00:08:31.809 6 memzones totaling size 4.142822 MiB 00:08:31.809 size: 1.000366 MiB name: RG_ring_0_472870 00:08:31.809 size: 1.000366 MiB name: RG_ring_1_472870 00:08:31.809 size: 1.000366 MiB name: RG_ring_4_472870 00:08:31.809 size: 1.000366 MiB name: RG_ring_5_472870 00:08:31.809 size: 0.125366 MiB name: RG_ring_2_472870 00:08:31.809 size: 0.015991 MiB name: RG_ring_3_472870 00:08:31.809 end memzones------- 00:08:31.809 22:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:32.070 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:08:32.070 list of free elements. 
size: 13.984680 MiB 00:08:32.070 element at address: 0x200000400000 with size: 1.999512 MiB 00:08:32.071 element at address: 0x200000800000 with size: 1.996948 MiB 00:08:32.071 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:08:32.071 element at address: 0x20001be00000 with size: 0.999878 MiB 00:08:32.071 element at address: 0x200034a00000 with size: 0.994446 MiB 00:08:32.071 element at address: 0x200009600000 with size: 0.959839 MiB 00:08:32.071 element at address: 0x200015e00000 with size: 0.954285 MiB 00:08:32.071 element at address: 0x20001c000000 with size: 0.936584 MiB 00:08:32.071 element at address: 0x200000200000 with size: 0.841614 MiB 00:08:32.071 element at address: 0x20001d800000 with size: 0.582886 MiB 00:08:32.071 element at address: 0x200003e00000 with size: 0.495605 MiB 00:08:32.071 element at address: 0x20000d800000 with size: 0.490723 MiB 00:08:32.071 element at address: 0x20001c200000 with size: 0.485657 MiB 00:08:32.071 element at address: 0x200007000000 with size: 0.481934 MiB 00:08:32.071 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:08:32.071 element at address: 0x200003a00000 with size: 0.354858 MiB 00:08:32.071 list of standard malloc elements. size: 199.218628 MiB 00:08:32.071 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:08:32.071 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:08:32.071 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:08:32.071 element at address: 0x20001befff80 with size: 1.000122 MiB 00:08:32.071 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:08:32.071 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:32.071 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:08:32.071 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:32.071 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:08:32.071 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:08:32.071 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:08:32.071 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:08:32.071 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:08:32.071 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:08:32.071 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:32.071 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003aff880 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003affa80 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003affb40 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20000707b600 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:08:32.071 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:08:32.071 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:08:32.071 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:08:32.071 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20001d895380 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20001d895440 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:08:32.071 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:08:32.071 list of memzone associated elements. size: 646.796692 MiB 00:08:32.071 element at address: 0x20001d895500 with size: 211.416748 MiB 00:08:32.071 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:32.071 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:08:32.071 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:32.071 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:08:32.071 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_472870_0 00:08:32.071 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:08:32.071 associated memzone info: size: 48.002930 MiB name: MP_evtpool_472870_0 00:08:32.071 element at address: 0x200003fff380 with size: 48.003052 MiB 00:08:32.071 associated memzone info: size: 48.002930 MiB name: MP_msgpool_472870_0 00:08:32.071 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:08:32.071 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_472870_0 00:08:32.071 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:08:32.071 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:32.071 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:08:32.071 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:32.071 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:08:32.071 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_472870 00:08:32.071 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:08:32.071 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_472870 00:08:32.071 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:32.071 associated memzone info: size: 1.007996 MiB name: MP_evtpool_472870 00:08:32.071 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:08:32.071 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:32.071 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:08:32.071 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:32.071 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:08:32.071 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:32.071 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:08:32.071 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:32.071 element at address: 0x200003eff180 with size: 1.000488 MiB 00:08:32.071 associated memzone info: size: 1.000366 MiB name: RG_ring_0_472870 00:08:32.071 element at address: 0x200003affc00 with size: 1.000488 MiB 00:08:32.071 associated memzone info: size: 
1.000366 MiB name: RG_ring_1_472870 00:08:32.071 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:08:32.071 associated memzone info: size: 1.000366 MiB name: RG_ring_4_472870 00:08:32.071 element at address: 0x200034afe940 with size: 1.000488 MiB 00:08:32.071 associated memzone info: size: 1.000366 MiB name: RG_ring_5_472870 00:08:32.071 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:08:32.071 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_472870 00:08:32.071 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:08:32.071 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_472870 00:08:32.071 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:08:32.071 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:32.071 element at address: 0x20000707b780 with size: 0.500488 MiB 00:08:32.071 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:32.071 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:08:32.071 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:32.071 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:08:32.071 associated memzone info: size: 0.125366 MiB name: RG_ring_2_472870 00:08:32.071 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:08:32.071 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:32.071 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:08:32.071 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:32.071 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:08:32.071 associated memzone info: size: 0.015991 MiB name: RG_ring_3_472870 00:08:32.071 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:08:32.071 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:32.071 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:08:32.071 associated memzone info: size: 0.000183 MiB name: MP_msgpool_472870 00:08:32.071 element at address: 0x200003aff940 with size: 0.000305 MiB 00:08:32.071 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_472870 00:08:32.071 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:08:32.071 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_472870 00:08:32.071 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:08:32.071 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:32.071 22:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:32.071 22:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 472870 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 472870 ']' 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 472870 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 472870 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 472870' 00:08:32.071 killing 
process with pid 472870 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 472870 00:08:32.071 22:36:58 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 472870 00:08:32.332 00:08:32.332 real 0m1.435s 00:08:32.332 user 0m1.496s 00:08:32.332 sys 0m0.442s 00:08:32.332 22:36:59 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.332 22:36:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:32.332 ************************************ 00:08:32.332 END TEST dpdk_mem_utility 00:08:32.332 ************************************ 00:08:32.332 22:36:59 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:32.332 22:36:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.332 22:36:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.332 22:36:59 -- common/autotest_common.sh@10 -- # set +x 00:08:32.332 ************************************ 00:08:32.332 START TEST event 00:08:32.332 ************************************ 00:08:32.332 22:36:59 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:32.332 * Looking for test storage... 00:08:32.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:32.332 22:36:59 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:32.332 22:36:59 event -- common/autotest_common.sh@1681 -- # lcov --version 00:08:32.332 22:36:59 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.591 22:36:59 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.591 22:36:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.591 22:36:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.591 22:36:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.591 22:36:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.591 22:36:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.591 22:36:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.591 22:36:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.591 22:36:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.591 22:36:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.591 22:36:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.591 22:36:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.592 22:36:59 event -- scripts/common.sh@344 -- # case "$op" in 00:08:32.592 22:36:59 event -- scripts/common.sh@345 -- # : 1 00:08:32.592 22:36:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.592 22:36:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.592 22:36:59 event -- scripts/common.sh@365 -- # decimal 1 00:08:32.592 22:36:59 event -- scripts/common.sh@353 -- # local d=1 00:08:32.592 22:36:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.592 22:36:59 event -- scripts/common.sh@355 -- # echo 1 00:08:32.592 22:36:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.592 22:36:59 event -- scripts/common.sh@366 -- # decimal 2 00:08:32.592 22:36:59 event -- scripts/common.sh@353 -- # local d=2 00:08:32.592 22:36:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.592 22:36:59 event -- scripts/common.sh@355 -- # echo 2 00:08:32.592 22:36:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.592 22:36:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.592 22:36:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.592 22:36:59 event -- scripts/common.sh@368 -- # return 0 00:08:32.592 22:36:59 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.592 22:36:59 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.592 --rc genhtml_branch_coverage=1 00:08:32.592 --rc genhtml_function_coverage=1 00:08:32.592 --rc genhtml_legend=1 00:08:32.592 --rc geninfo_all_blocks=1 00:08:32.592 --rc geninfo_unexecuted_blocks=1 00:08:32.592 00:08:32.592 ' 00:08:32.592 22:36:59 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.592 --rc genhtml_branch_coverage=1 00:08:32.592 --rc genhtml_function_coverage=1 00:08:32.592 --rc genhtml_legend=1 00:08:32.592 --rc geninfo_all_blocks=1 00:08:32.592 --rc geninfo_unexecuted_blocks=1 00:08:32.592 00:08:32.592 ' 00:08:32.592 22:36:59 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.592 --rc genhtml_branch_coverage=1 00:08:32.592 --rc genhtml_function_coverage=1 00:08:32.592 --rc genhtml_legend=1 00:08:32.592 --rc geninfo_all_blocks=1 00:08:32.592 --rc geninfo_unexecuted_blocks=1 00:08:32.592 00:08:32.592 ' 00:08:32.592 22:36:59 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.592 --rc genhtml_branch_coverage=1 00:08:32.592 --rc genhtml_function_coverage=1 00:08:32.592 --rc genhtml_legend=1 00:08:32.592 --rc geninfo_all_blocks=1 00:08:32.592 --rc geninfo_unexecuted_blocks=1 00:08:32.592 00:08:32.592 ' 00:08:32.592 22:36:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:32.592 22:36:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:32.592 22:36:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:32.592 22:36:59 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:32.592 22:36:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.592 22:36:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:32.592 ************************************ 00:08:32.592 START TEST event_perf 00:08:32.592 ************************************ 00:08:32.592 22:36:59 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:08:32.592 Running I/O for 1 seconds...[2024-09-30 22:36:59.470069] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:08:32.592 [2024-09-30 22:36:59.470161] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473243 ] 00:08:32.592 [2024-09-30 22:36:59.554937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.851 [2024-09-30 22:36:59.626380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.851 [2024-09-30 22:36:59.626535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.851 [2024-09-30 22:36:59.626691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.851 Running I/O for 1 seconds...[2024-09-30 22:36:59.626692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.791 00:08:33.791 lcore 0: 185025 00:08:33.791 lcore 1: 185028 00:08:33.791 lcore 2: 185024 00:08:33.791 lcore 3: 185025 00:08:33.791 done. 00:08:33.791 00:08:33.791 real 0m1.225s 00:08:33.791 user 0m4.126s 00:08:33.791 sys 0m0.096s 00:08:33.791 22:37:00 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.791 22:37:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:33.791 ************************************ 00:08:33.791 END TEST event_perf 00:08:33.791 ************************************ 00:08:33.791 22:37:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:33.791 22:37:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:33.791 22:37:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.791 22:37:00 event -- common/autotest_common.sh@10 -- # set +x 00:08:33.791 ************************************ 00:08:33.791 START TEST event_reactor 00:08:33.791 ************************************ 00:08:33.791 22:37:00 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:33.791 [2024-09-30 22:37:00.774038] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
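In the event_perf run just completed, -m 0xF placed one reactor on each of cores 0-3 and -t 1 bounded the run to one second, so the four lcore counters (~185k each) read as per-core event counts for that window. The harness invocation, for reference:

    # Four reactors (cpumask 0xF = cores 0-3), one-second run
    ./test/event/event_perf/event_perf -m 0xF -t 1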
00:08:33.791 [2024-09-30 22:37:00.774144] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473403 ] 00:08:34.115 [2024-09-30 22:37:00.854741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.115 [2024-09-30 22:37:00.921803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.141 test_start 00:08:35.141 oneshot 00:08:35.141 tick 100 00:08:35.141 tick 100 00:08:35.141 tick 250 00:08:35.141 tick 100 00:08:35.141 tick 100 00:08:35.141 tick 100 00:08:35.141 tick 250 00:08:35.141 tick 500 00:08:35.141 tick 100 00:08:35.141 tick 100 00:08:35.141 tick 250 00:08:35.141 tick 100 00:08:35.141 tick 100 00:08:35.141 test_end 00:08:35.141 00:08:35.141 real 0m1.213s 00:08:35.141 user 0m1.120s 00:08:35.141 sys 0m0.089s 00:08:35.141 22:37:01 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.141 22:37:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 ************************************ 00:08:35.141 END TEST event_reactor 00:08:35.141 ************************************ 00:08:35.141 22:37:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:35.141 22:37:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:35.141 22:37:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.141 22:37:02 event -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 ************************************ 00:08:35.141 START TEST event_reactor_perf 00:08:35.141 ************************************ 00:08:35.141 22:37:02 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:35.141 [2024-09-30 22:37:02.067891] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
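The event_reactor test above drives a single core (-c 0x1) for one second; test_start/test_end bracket the run, and each tick line appears to record a timed event with the period shown (100, 250, 500), alongside the one-shot event. The harness invocation, for reference:

    # Single-reactor timed-event test, one-second run
    ./test/event/reactor/reactor -t 1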
00:08:35.141 [2024-09-30 22:37:02.067981] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473665 ] 00:08:35.141 [2024-09-30 22:37:02.149425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.429 [2024-09-30 22:37:02.207956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.370 test_start 00:08:36.370 test_end 00:08:36.370 Performance: 536006 events per second 00:08:36.370 00:08:36.370 real 0m1.204s 00:08:36.370 user 0m1.111s 00:08:36.370 sys 0m0.089s 00:08:36.370 22:37:03 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.370 22:37:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:36.370 ************************************ 00:08:36.370 END TEST event_reactor_perf 00:08:36.370 ************************************ 00:08:36.370 22:37:03 event -- event/event.sh@49 -- # uname -s 00:08:36.370 22:37:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:36.370 22:37:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:36.370 22:37:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.370 22:37:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.370 22:37:03 event -- common/autotest_common.sh@10 -- # set +x 00:08:36.370 ************************************ 00:08:36.370 START TEST event_scheduler 00:08:36.370 ************************************ 00:08:36.370 22:37:03 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:36.631 * Looking for test storage... 
00:08:36.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.631 22:37:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.631 --rc genhtml_branch_coverage=1 00:08:36.631 --rc genhtml_function_coverage=1 00:08:36.631 --rc genhtml_legend=1 00:08:36.631 --rc geninfo_all_blocks=1 00:08:36.631 --rc geninfo_unexecuted_blocks=1 00:08:36.631 00:08:36.631 ' 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.631 --rc genhtml_branch_coverage=1 00:08:36.631 --rc genhtml_function_coverage=1 00:08:36.631 --rc genhtml_legend=1 00:08:36.631 --rc geninfo_all_blocks=1 00:08:36.631 --rc geninfo_unexecuted_blocks=1 00:08:36.631 00:08:36.631 ' 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.631 --rc genhtml_branch_coverage=1 00:08:36.631 --rc genhtml_function_coverage=1 00:08:36.631 --rc genhtml_legend=1 00:08:36.631 --rc geninfo_all_blocks=1 00:08:36.631 --rc geninfo_unexecuted_blocks=1 00:08:36.631 00:08:36.631 ' 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.631 --rc genhtml_branch_coverage=1 00:08:36.631 --rc genhtml_function_coverage=1 00:08:36.631 --rc genhtml_legend=1 00:08:36.631 --rc geninfo_all_blocks=1 00:08:36.631 --rc geninfo_unexecuted_blocks=1 00:08:36.631 00:08:36.631 ' 00:08:36.631 22:37:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:36.631 22:37:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=474051 00:08:36.631 22:37:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.631 22:37:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 474051 00:08:36.631 22:37:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 474051 ']' 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.631 22:37:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:36.631 [2024-09-30 22:37:03.585141] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:08:36.631 [2024-09-30 22:37:03.585209] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474051 ] 00:08:36.893 [2024-09-30 22:37:03.667949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.893 [2024-09-30 22:37:03.761315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.893 [2024-09-30 22:37:03.761477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.893 [2024-09-30 22:37:03.761635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.893 [2024-09-30 22:37:03.761636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:37.466 22:37:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.466 [2024-09-30 22:37:04.407976] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:37.466 [2024-09-30 22:37:04.407996] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:37.466 [2024-09-30 22:37:04.408006] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:37.466 [2024-09-30 22:37:04.408013] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:37.466 [2024-09-30 22:37:04.408019] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.466 22:37:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.466 [2024-09-30 22:37:04.474352] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
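Above, the scheduler app is switched to the dynamic scheduler before framework init. The *ERROR* about SMT siblings is followed immediately by the governor declining to initialize, so the run proceeds on the dynamic scheduler's defaults (load limit 20, core limit 80, core busy 95) rather than with the dpdk governor. The same switch on a live target started with --wait-for-rpc, as a sketch:

    # Select the dynamic scheduler, then let framework init proceed
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init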
00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.466 22:37:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.466 22:37:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.727 ************************************ 00:08:37.727 START TEST scheduler_create_thread 00:08:37.727 ************************************ 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.727 2 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.727 3 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.727 4 00:08:37.727 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.728 5 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.728 6 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.728 7 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.728 8 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.728 22:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.299 9 00:08:38.299 22:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.299 22:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:38.299 22:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.299 22:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:39.241 10 00:08:39.241 22:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.241 22:37:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:39.241 22:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.241 22:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:40.183 22:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.183 22:37:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:40.183 22:37:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:40.183 22:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.183 22:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:40.755 22:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.755 22:37:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:40.755 22:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.755 22:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:41.697 22:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.697 22:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:41.697 22:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:41.697 22:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.697 22:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.269 22:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.269 00:08:42.269 real 0m4.466s 00:08:42.269 user 0m0.023s 00:08:42.269 sys 0m0.008s 00:08:42.269 22:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.269 22:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.269 ************************************ 00:08:42.269 END TEST scheduler_create_thread 00:08:42.269 ************************************ 00:08:42.269 22:37:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:42.269 22:37:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 474051 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 474051 ']' 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 474051 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474051 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474051' 00:08:42.269 killing process with pid 474051 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 474051 00:08:42.269 22:37:09 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 474051 00:08:42.269 [2024-09-30 22:37:09.261097] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
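The scheduler_create_thread subtest above exercises the dynamic scheduler through the test plugin: four busy threads pinned one per core (-a 100), four idle pinned threads (-a 0), plus unpinned threads created at various activity levels (-a 30, -a 0), one of which is set to 50% mid-run, and one more created only to be deleted by id. The plugin calls, as they appear in the trace (rpc_cmd is the harness's RPC wrapper):

    # Busy thread pinned to core 0; idle thread pinned to core 1
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
    # Set thread 11 to 50% activity, then delete thread 12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12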
00:08:42.530 00:08:42.530 real 0m6.087s 00:08:42.530 user 0m14.318s 00:08:42.530 sys 0m0.435s 00:08:42.530 22:37:09 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.530 22:37:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:42.530 ************************************ 00:08:42.530 END TEST event_scheduler 00:08:42.530 ************************************ 00:08:42.530 22:37:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:42.530 22:37:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:42.530 22:37:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.531 22:37:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.531 22:37:09 event -- common/autotest_common.sh@10 -- # set +x 00:08:42.531 ************************************ 00:08:42.531 START TEST app_repeat 00:08:42.531 ************************************ 00:08:42.531 22:37:09 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=475408 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 475408' 00:08:42.531 Process app_repeat pid: 475408 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:42.531 spdk_app_start Round 0 00:08:42.531 22:37:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 475408 /var/tmp/spdk-nbd.sock 00:08:42.531 22:37:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 475408 ']' 00:08:42.531 22:37:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:42.531 22:37:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.531 22:37:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:42.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:42.531 22:37:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.531 22:37:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:42.792 [2024-09-30 22:37:09.549353] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:08:42.792 [2024-09-30 22:37:09.549446] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475408 ] 00:08:42.792 [2024-09-30 22:37:09.638668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:42.792 [2024-09-30 22:37:09.695409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.792 [2024-09-30 22:37:09.695409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.363 22:37:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.363 22:37:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:43.363 22:37:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:43.623 Malloc0 00:08:43.623 22:37:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:43.883 Malloc1 00:08:43.883 22:37:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:43.883 22:37:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:44.146 /dev/nbd0 00:08:44.146 22:37:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:44.146 22:37:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:44.146 1+0 records in 00:08:44.146 1+0 records out 00:08:44.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276627 s, 14.8 MB/s 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:44.146 22:37:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:44.146 22:37:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.146 22:37:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.146 22:37:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:44.408 /dev/nbd1 00:08:44.408 22:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:44.408 22:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:44.408 1+0 records in 00:08:44.408 1+0 records out 00:08:44.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279668 s, 14.6 MB/s 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:44.408 22:37:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:44.408 22:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.408 22:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.408 
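
Both malloc bdevs are now exported, and waitfornbd has confirmed /dev/nbd0 and /dev/nbd1: it polls /proc/partitions until the device appears, then forces a 4 KiB O_DIRECT read so the block device is known to service I/O. A sketch matching the grep and dd calls traced above (the retry bound of 20 is from the trace; the temp-file path is shortened here and the sleep interval is an assumption):

    # Sketch: wait until an nbd device exists and can be read.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        ((i <= 20)) || return 1
        # read one 4 KiB block with O_DIRECT, like the nbdtest dd above
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                           # trace: '[' 4096 '!=' 0 ']'
    }
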
22:37:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.408 22:37:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.408 22:37:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:44.408 22:37:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:44.408 { 00:08:44.408 "nbd_device": "/dev/nbd0", 00:08:44.408 "bdev_name": "Malloc0" 00:08:44.408 }, 00:08:44.408 { 00:08:44.408 "nbd_device": "/dev/nbd1", 00:08:44.408 "bdev_name": "Malloc1" 00:08:44.408 } 00:08:44.408 ]' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:44.668 { 00:08:44.668 "nbd_device": "/dev/nbd0", 00:08:44.668 "bdev_name": "Malloc0" 00:08:44.668 }, 00:08:44.668 { 00:08:44.668 "nbd_device": "/dev/nbd1", 00:08:44.668 "bdev_name": "Malloc1" 00:08:44.668 } 00:08:44.668 ]' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:44.668 /dev/nbd1' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:44.668 /dev/nbd1' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:44.668 256+0 records in 00:08:44.668 256+0 records out 00:08:44.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122719 s, 85.4 MB/s 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:44.668 256+0 records in 00:08:44.668 256+0 records out 00:08:44.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120464 s, 87.0 MB/s 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:44.668 256+0 records in 00:08:44.668 256+0 records out 00:08:44.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01355 s, 77.4 MB/s 00:08:44.668 22:37:11 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:44.668 22:37:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.669 22:37:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.669 22:37:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:44.669 22:37:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:44.669 22:37:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.669 22:37:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.929 22:37:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:08:44.930 22:37:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:44.930 22:37:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:44.930 22:37:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.930 22:37:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.930 22:37:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.930 22:37:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:45.190 22:37:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:45.190 22:37:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:45.450 22:37:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:45.450 [2024-09-30 22:37:12.445026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.710 [2024-09-30 22:37:12.498018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.710 [2024-09-30 22:37:12.498162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.710 [2024-09-30 22:37:12.527314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:45.710 [2024-09-30 22:37:12.527346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:49.066 22:37:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:49.066 22:37:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:49.066 spdk_app_start Round 1 00:08:49.066 22:37:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 475408 /var/tmp/spdk-nbd.sock 00:08:49.066 22:37:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 475408 ']' 00:08:49.066 22:37:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:49.066 22:37:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.066 22:37:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:49.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
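
With the disks stopped, nbd_get_count verifies nothing is still exported: nbd_get_disks returns a JSON array, jq pulls out each .nbd_device, and grep -c counts the matches. The bare `true` in the trace is the fallback for grep's nonzero exit status when the count is 0. A sketch under those assumptions (rpc.py path shortened):

    # Sketch: count exported nbd devices via the RPC socket.
    nbd_get_count_sketch() {
        local rpc_sock=$1 disks_json disks_name count
        disks_json=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks)   # '[]' once stopped
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c still prints 0 on no match but exits 1, hence || true
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }
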
00:08:49.066 22:37:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.066 22:37:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:49.066 22:37:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.066 22:37:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:49.066 22:37:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:49.066 Malloc0 00:08:49.066 22:37:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:49.066 Malloc1 00:08:49.066 22:37:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:49.066 22:37:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:49.066 /dev/nbd0 00:08:49.066 22:37:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:49.066 22:37:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:49.066 1+0 records in 00:08:49.066 1+0 records out 00:08:49.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285723 s, 14.3 MB/s 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:49.066 22:37:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:49.326 /dev/nbd1 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:49.326 1+0 records in 00:08:49.326 1+0 records out 00:08:49.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287352 s, 14.3 MB/s 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:49.326 22:37:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.326 22:37:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:49.587 { 00:08:49.587 "nbd_device": "/dev/nbd0", 00:08:49.587 "bdev_name": "Malloc0" 00:08:49.587 }, 00:08:49.587 { 00:08:49.587 "nbd_device": "/dev/nbd1", 00:08:49.587 "bdev_name": "Malloc1" 00:08:49.587 } 00:08:49.587 ]' 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:49.587 { 00:08:49.587 "nbd_device": "/dev/nbd0", 00:08:49.587 "bdev_name": "Malloc0" 00:08:49.587 }, 00:08:49.587 { 00:08:49.587 "nbd_device": "/dev/nbd1", 00:08:49.587 "bdev_name": "Malloc1" 00:08:49.587 } 00:08:49.587 ]' 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:49.587 /dev/nbd1' 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:49.587 /dev/nbd1' 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:49.587 22:37:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:49.587 256+0 records in 00:08:49.587 256+0 records out 00:08:49.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127158 s, 82.5 MB/s 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:49.588 256+0 records in 00:08:49.588 256+0 records out 00:08:49.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122331 s, 85.7 MB/s 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:49.588 256+0 records in 00:08:49.588 256+0 records out 00:08:49.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132468 s, 79.2 MB/s 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.588 22:37:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.849 22:37:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.109 22:37:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:50.369 22:37:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:50.369 22:37:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:50.630 22:37:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:50.630 [2024-09-30 22:37:17.515148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.630 [2024-09-30 22:37:17.567868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.630 [2024-09-30 22:37:17.567870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.630 [2024-09-30 22:37:17.597689] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:50.630 [2024-09-30 22:37:17.597721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:53.928 22:37:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:53.928 22:37:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:53.928 spdk_app_start Round 2 00:08:53.928 22:37:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 475408 /var/tmp/spdk-nbd.sock 00:08:53.928 22:37:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 475408 ']' 00:08:53.928 22:37:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:53.928 22:37:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.928 22:37:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:53.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
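
Round 1's data path repeated the same dd/cmp cycle seen in Round 0: fill a 1 MiB temp file from /dev/urandom, write it to each nbd device with O_DIRECT, then compare the first 1 MiB of each device back against the file. Condensed into a sketch (temp path shortened from the workspace nbdrandtest file):

    # Sketch: write random data through each nbd device and verify it.
    tmp_file=/tmp/nbdrandtest                      # placeholder path
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256    # 1 MiB of random bytes
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"            # -b prints any differing bytes
    done
    rm "$tmp_file"
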
00:08:53.928 22:37:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.928 22:37:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:53.928 22:37:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.928 22:37:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:53.928 22:37:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:53.928 Malloc0 00:08:53.928 22:37:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:53.928 Malloc1 00:08:54.188 22:37:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:54.188 22:37:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.188 22:37:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:54.188 22:37:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.189 22:37:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:54.189 /dev/nbd0 00:08:54.189 22:37:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:54.189 22:37:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:54.189 1+0 records in 00:08:54.189 1+0 records out 00:08:54.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270669 s, 15.1 MB/s 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:54.189 22:37:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:54.189 22:37:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.189 22:37:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.189 22:37:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:54.449 /dev/nbd1 00:08:54.449 22:37:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:54.449 22:37:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:54.449 1+0 records in 00:08:54.449 1+0 records out 00:08:54.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273299 s, 15.0 MB/s 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:54.449 22:37:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:54.449 22:37:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.449 22:37:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.449 22:37:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.449 22:37:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.449 22:37:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.709 22:37:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:54.709 { 00:08:54.709 "nbd_device": "/dev/nbd0", 00:08:54.709 "bdev_name": "Malloc0" 00:08:54.709 }, 00:08:54.709 { 00:08:54.710 "nbd_device": "/dev/nbd1", 00:08:54.710 "bdev_name": "Malloc1" 00:08:54.710 } 00:08:54.710 ]' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:54.710 { 00:08:54.710 "nbd_device": "/dev/nbd0", 00:08:54.710 "bdev_name": "Malloc0" 00:08:54.710 }, 00:08:54.710 { 00:08:54.710 "nbd_device": "/dev/nbd1", 00:08:54.710 "bdev_name": "Malloc1" 00:08:54.710 } 00:08:54.710 ]' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:54.710 /dev/nbd1' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:54.710 /dev/nbd1' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:54.710 256+0 records in 00:08:54.710 256+0 records out 00:08:54.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123047 s, 85.2 MB/s 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:54.710 256+0 records in 00:08:54.710 256+0 records out 00:08:54.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120752 s, 86.8 MB/s 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:54.710 256+0 records in 00:08:54.710 256+0 records out 00:08:54.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128912 s, 81.3 MB/s 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.710 22:37:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.970 22:37:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.231 22:37:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:55.492 22:37:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:55.492 22:37:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:55.753 22:37:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:55.753 [2024-09-30 22:37:22.640230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.753 [2024-09-30 22:37:22.693504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.753 [2024-09-30 22:37:22.693505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.753 [2024-09-30 22:37:22.722957] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:55.753 [2024-09-30 22:37:22.722987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:59.052 22:37:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 475408 /var/tmp/spdk-nbd.sock 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 475408 ']' 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:59.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
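
The waitforlisten above is event.sh's post-loop check before it kills pid 475408. Every round followed the same driver: run the checks, then ask the target to deliver SIGTERM to itself via the spdk_kill_instance RPC and pause before the next iteration. A sketch of that loop (structure inferred from the `for i in {0..2}`, spdk_kill_instance, and `sleep 3` lines in the trace; event.sh's real body does more per round):

    # Sketch: the app_repeat round driver.
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... create malloc bdevs, export nbd devices, write + verify ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                    # let the app restart cleanly
    done
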
00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:59.052 22:37:25 event.app_repeat -- event/event.sh@39 -- # killprocess 475408 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 475408 ']' 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 475408 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475408 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475408' 00:08:59.052 killing process with pid 475408 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@969 -- # kill 475408 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@974 -- # wait 475408 00:08:59.052 spdk_app_start is called in Round 0. 00:08:59.052 Shutdown signal received, stop current app iteration 00:08:59.052 Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 reinitialization... 00:08:59.052 spdk_app_start is called in Round 1. 00:08:59.052 Shutdown signal received, stop current app iteration 00:08:59.052 Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 reinitialization... 00:08:59.052 spdk_app_start is called in Round 2. 00:08:59.052 Shutdown signal received, stop current app iteration 00:08:59.052 Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 reinitialization... 00:08:59.052 spdk_app_start is called in Round 3. 
00:08:59.052 Shutdown signal received, stop current app iteration 00:08:59.052 22:37:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:59.052 22:37:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:59.052 00:08:59.052 real 0m16.387s 00:08:59.052 user 0m35.762s 00:08:59.052 sys 0m2.306s 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.052 22:37:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:59.052 ************************************ 00:08:59.052 END TEST app_repeat 00:08:59.052 ************************************ 00:08:59.052 22:37:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:59.052 22:37:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:59.052 22:37:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.052 22:37:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.052 22:37:25 event -- common/autotest_common.sh@10 -- # set +x 00:08:59.052 ************************************ 00:08:59.052 START TEST cpu_locks 00:08:59.052 ************************************ 00:08:59.052 22:37:25 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:59.052 * Looking for test storage... 00:08:59.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.314 22:37:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:59.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.314 --rc genhtml_branch_coverage=1 00:08:59.314 --rc genhtml_function_coverage=1 00:08:59.314 --rc genhtml_legend=1 00:08:59.314 --rc geninfo_all_blocks=1 00:08:59.314 --rc geninfo_unexecuted_blocks=1 00:08:59.314 00:08:59.314 ' 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:59.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.314 --rc genhtml_branch_coverage=1 00:08:59.314 --rc genhtml_function_coverage=1 00:08:59.314 --rc genhtml_legend=1 00:08:59.314 --rc geninfo_all_blocks=1 00:08:59.314 --rc geninfo_unexecuted_blocks=1 00:08:59.314 00:08:59.314 ' 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:59.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.314 --rc genhtml_branch_coverage=1 00:08:59.314 --rc genhtml_function_coverage=1 00:08:59.314 --rc genhtml_legend=1 00:08:59.314 --rc geninfo_all_blocks=1 00:08:59.314 --rc geninfo_unexecuted_blocks=1 00:08:59.314 00:08:59.314 ' 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:59.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.314 --rc genhtml_branch_coverage=1 00:08:59.314 --rc genhtml_function_coverage=1 00:08:59.314 --rc genhtml_legend=1 00:08:59.314 --rc geninfo_all_blocks=1 00:08:59.314 --rc geninfo_unexecuted_blocks=1 00:08:59.314 00:08:59.314 ' 00:08:59.314 22:37:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:59.314 22:37:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:59.314 22:37:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:59.314 22:37:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.314 22:37:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:59.314 ************************************ 
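[annotation] The scripts/common.sh trace above is a componentwise version comparison: "lt 1.15 2" splits both strings on the characters . - : and compares the fields numerically, so for example 1.9 sorts below 1.15 (field 9 < field 15), which a plain string comparison would get wrong. A condensed sketch of the same idea, assuming purely numeric fields (the real cmp_versions also validates each field with decimal):

    # Condensed sketch of the cmp_versions idea traced above: split on .-: and
    # compare field by field, numerically. Returns 0 when $1 < $2.
    version_lt() {
        local IFS=.-:
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 is older"   # matches the 'lt 1.15 2' call above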
00:08:59.314 START TEST default_locks 00:08:59.314 ************************************ 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=478934 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 478934 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 478934 ']' 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.314 22:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:59.314 [2024-09-30 22:37:26.278186] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:08:59.314 [2024-09-30 22:37:26.278249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478934 ] 00:08:59.575 [2024-09-30 22:37:26.361476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.575 [2024-09-30 22:37:26.432521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.148 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.148 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:09:00.148 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 478934 00:09:00.148 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 478934 00:09:00.148 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:00.720 lslocks: write error 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 478934 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 478934 ']' 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 478934 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 478934 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 478934' 
00:09:00.720 killing process with pid 478934 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 478934 00:09:00.720 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 478934 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 478934 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 478934 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 478934 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 478934 ']' 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
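[annotation] The "lslocks: write error" message above is benign: locks_exist pipes lslocks into grep -q, grep exits as soon as it sees a match, and lslocks then takes a broken-pipe error on its next write. A sketch of the probe as traced:

    # Sketch of the lock probe traced above. grep -q exits on the first match,
    # which closes the pipe early and makes lslocks print "write error".
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 478934 && echo "pid 478934 holds an spdk_cpu_lock file"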
00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:00.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (478934) - No such process 00:09:00.981 ERROR: process (pid: 478934) is no longer running 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:00.981 00:09:00.981 real 0m1.531s 00:09:00.981 user 0m1.646s 00:09:00.981 sys 0m0.549s 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.981 22:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:00.981 ************************************ 00:09:00.981 END TEST default_locks 00:09:00.981 ************************************ 00:09:00.981 22:37:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:00.981 22:37:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.982 22:37:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.982 22:37:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:00.982 ************************************ 00:09:00.982 START TEST default_locks_via_rpc 00:09:00.982 ************************************ 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=479270 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 479270 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 479270 ']' 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
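[annotation] The block above is an inverted assertion: after killprocess, NOT waitforlisten 478934 must fail, so the "No such process" line and the es=1 bookkeeping are the test passing, not a real error. The core of the idiom, condensed (the traced helper additionally treats exit codes above 128, deaths by signal, as genuine failures rather than expected ones):

    # Condensed sketch of the NOT idiom traced above: succeed only when the
    # wrapped command fails, turning an expected error into a passing check.
    NOT() {
        if "$@"; then
            return 1    # the wrapped command unexpectedly succeeded
        fi
        return 0        # it failed, which is exactly what the test asserts
    }
    NOT kill -0 478934 && echo "pid 478934 really is gone"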
00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.982 22:37:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.982 [2024-09-30 22:37:27.887031] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:00.982 [2024-09-30 22:37:27.887090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479270 ] 00:09:00.982 [2024-09-30 22:37:27.968545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.242 [2024-09-30 22:37:28.028746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 479270 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 479270 00:09:01.814 22:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 479270 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 479270 ']' 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 479270 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 479270 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.386 22:37:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 479270' 00:09:02.386 killing process with pid 479270 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 479270 00:09:02.386 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 479270 00:09:02.646 00:09:02.646 real 0m1.590s 00:09:02.646 user 0m1.697s 00:09:02.646 sys 0m0.565s 00:09:02.646 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.646 22:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.646 ************************************ 00:09:02.646 END TEST default_locks_via_rpc 00:09:02.646 ************************************ 00:09:02.646 22:37:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:02.646 22:37:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.646 22:37:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.646 22:37:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.646 ************************************ 00:09:02.646 START TEST non_locking_app_on_locked_coremask 00:09:02.646 ************************************ 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=479625 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 479625 /var/tmp/spdk.sock 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 479625 ']' 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.646 22:37:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.646 [2024-09-30 22:37:29.559493] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:09:02.646 [2024-09-30 22:37:29.559553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479625 ] 00:09:02.646 [2024-09-30 22:37:29.639389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.907 [2024-09-30 22:37:29.700862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=479799 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 479799 /var/tmp/spdk2.sock 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 479799 ']' 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:03.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.478 22:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:03.478 [2024-09-30 22:37:30.413295] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:03.478 [2024-09-30 22:37:30.413348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479799 ] 00:09:03.478 [2024-09-30 22:37:30.487739] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
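[annotation] Here non_locking_app_on_locked_coremask brings up a second target on the same core mask; it only starts because it is launched with --disable-cpumask-locks and its own RPC socket, and the "CPU core locks deactivated" notice above confirms the lock was skipped. A sketch of that launch pattern, with the spdk_tgt binary path abbreviated:

    # Sketch of the two-instance launch traced above (binary path abbreviated).
    spdk_tgt -m 0x1 &                               # claims /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks \
             -r /var/tmp/spdk2.sock &               # shares core 0 without a lock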
00:09:03.478 [2024-09-30 22:37:30.487766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.739 [2024-09-30 22:37:30.597793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.311 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.311 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:04.311 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 479625 00:09:04.311 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 479625 00:09:04.311 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:04.880 lslocks: write error 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 479625 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 479625 ']' 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 479625 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 479625 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 479625' 00:09:04.880 killing process with pid 479625 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 479625 00:09:04.880 22:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 479625 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 479799 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 479799 ']' 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 479799 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 479799 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 479799' 00:09:05.141 killing 
process with pid 479799 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 479799 00:09:05.141 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 479799 00:09:05.402 00:09:05.402 real 0m2.834s 00:09:05.402 user 0m3.164s 00:09:05.402 sys 0m0.858s 00:09:05.402 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.402 22:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.402 ************************************ 00:09:05.402 END TEST non_locking_app_on_locked_coremask 00:09:05.402 ************************************ 00:09:05.402 22:37:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:05.402 22:37:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.402 22:37:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.402 22:37:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.402 ************************************ 00:09:05.402 START TEST locking_app_on_unlocked_coremask 00:09:05.402 ************************************ 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=480178 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 480178 /var/tmp/spdk.sock 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 480178 ']' 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.402 22:37:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.663 [2024-09-30 22:37:32.463746] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:05.663 [2024-09-30 22:37:32.463797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480178 ] 00:09:05.663 [2024-09-30 22:37:32.538519] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
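[annotation] Each launch in this suite is gated by waitforlisten, which polls until the new target answers on its UNIX-domain RPC socket (max_retries=100 in the traced helper). A hypothetical condensed form of that loop, assuming scripts/rpc.py as the probe:

    # Hypothetical condensed waitforlisten: poll the RPC socket until the
    # target responds, and bail out early if the process dies first.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [ -S "$sock" ] && rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }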
00:09:05.663 [2024-09-30 22:37:32.538552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.663 [2024-09-30 22:37:32.595416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=480509 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 480509 /var/tmp/spdk2.sock 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 480509 ']' 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:06.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.604 22:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:06.604 [2024-09-30 22:37:33.322416] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:09:06.604 [2024-09-30 22:37:33.322471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480509 ] 00:09:06.604 [2024-09-30 22:37:33.398031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.604 [2024-09-30 22:37:33.504721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.176 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.177 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:07.177 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 480509 00:09:07.177 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 480509 00:09:07.177 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:07.749 lslocks: write error 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 480178 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 480178 ']' 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 480178 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480178 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480178' 00:09:07.749 killing process with pid 480178 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 480178 00:09:07.749 22:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 480178 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 480509 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 480509 ']' 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 480509 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480509 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.320 22:37:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480509' 00:09:08.320 killing process with pid 480509 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 480509 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 480509 00:09:08.320 00:09:08.320 real 0m2.918s 00:09:08.320 user 0m3.220s 00:09:08.320 sys 0m0.917s 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.320 22:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.320 ************************************ 00:09:08.320 END TEST locking_app_on_unlocked_coremask 00:09:08.320 ************************************ 00:09:08.582 22:37:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:08.582 22:37:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.582 22:37:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.582 22:37:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.582 ************************************ 00:09:08.582 START TEST locking_app_on_locked_coremask 00:09:08.582 ************************************ 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=480884 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 480884 /var/tmp/spdk.sock 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 480884 ']' 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.582 22:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.582 [2024-09-30 22:37:35.459556] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:09:08.582 [2024-09-30 22:37:35.459605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480884 ] 00:09:08.582 [2024-09-30 22:37:35.536272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.582 [2024-09-30 22:37:35.591581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.524 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.524 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:09.524 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=481070 00:09:09.524 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 481070 /var/tmp/spdk2.sock 00:09:09.524 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:09.524 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 481070 /var/tmp/spdk2.sock 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 481070 /var/tmp/spdk2.sock 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 481070 ']' 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:09.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.525 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:09.525 [2024-09-30 22:37:36.305448] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:09:09.525 [2024-09-30 22:37:36.305501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481070 ] 00:09:09.525 [2024-09-30 22:37:36.379083] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 480884 has claimed it. 00:09:09.525 [2024-09-30 22:37:36.379120] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:10.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (481070) - No such process 00:09:10.098 ERROR: process (pid: 481070) is no longer running 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 480884 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 480884 00:09:10.098 22:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:10.669 lslocks: write error 00:09:10.669 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 480884 00:09:10.669 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 480884 ']' 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 480884 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480884 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480884' 00:09:10.670 killing process with pid 480884 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 480884 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 480884 00:09:10.670 00:09:10.670 real 0m2.280s 00:09:10.670 user 0m2.573s 00:09:10.670 sys 0m0.661s 00:09:10.670 22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.670 
22:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.670 ************************************ 00:09:10.670 END TEST locking_app_on_locked_coremask 00:09:10.670 ************************************ 00:09:10.931 22:37:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:10.931 22:37:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.931 22:37:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.931 22:37:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.931 ************************************ 00:09:10.931 START TEST locking_overlapped_coremask 00:09:10.931 ************************************ 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=481338 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 481338 /var/tmp/spdk.sock 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 481338 ']' 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.931 22:37:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.932 [2024-09-30 22:37:37.821539] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:09:10.932 [2024-09-30 22:37:37.821605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481338 ] 00:09:10.932 [2024-09-30 22:37:37.904321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.194 [2024-09-30 22:37:37.976844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.194 [2024-09-30 22:37:37.977003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.194 [2024-09-30 22:37:37.977158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=481597 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 481597 /var/tmp/spdk2.sock 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 481597 /var/tmp/spdk2.sock 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 481597 /var/tmp/spdk2.sock 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 481597 ']' 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:11.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.767 22:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.767 [2024-09-30 22:37:38.659066] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:09:11.767 [2024-09-30 22:37:38.659116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481597 ] 00:09:11.767 [2024-09-30 22:37:38.752210] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 481338 has claimed it. 00:09:11.767 [2024-09-30 22:37:38.752252] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:12.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (481597) - No such process 00:09:12.338 ERROR: process (pid: 481597) is no longer running 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 481338 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 481338 ']' 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 481338 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 481338 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 481338' 00:09:12.338 killing process with pid 481338 00:09:12.338 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 481338 00:09:12.338 22:37:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 481338 00:09:12.599 00:09:12.599 real 0m1.802s 00:09:12.599 user 0m5.062s 00:09:12.599 sys 0m0.420s 00:09:12.599 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.599 22:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:12.599 ************************************ 00:09:12.599 END TEST locking_overlapped_coremask 00:09:12.599 ************************************ 00:09:12.599 22:37:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:12.599 22:37:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:12.599 22:37:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.599 22:37:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:12.860 ************************************ 00:09:12.860 START TEST locking_overlapped_coremask_via_rpc 00:09:12.860 ************************************ 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=481808 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 481808 /var/tmp/spdk.sock 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 481808 ']' 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.860 22:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.860 [2024-09-30 22:37:39.689712] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:12.860 [2024-09-30 22:37:39.689772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481808 ] 00:09:12.860 [2024-09-30 22:37:39.770481] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:12.860 [2024-09-30 22:37:39.770527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.860 [2024-09-30 22:37:39.845572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.860 [2024-09-30 22:37:39.845725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.860 [2024-09-30 22:37:39.845727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=481974 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 481974 /var/tmp/spdk2.sock 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 481974 ']' 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:13.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.802 22:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.802 [2024-09-30 22:37:40.551088] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:13.802 [2024-09-30 22:37:40.551142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481974 ] 00:09:13.802 [2024-09-30 22:37:40.644135] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
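Both targets report "Total cores available: 3" because each hex mask has three bits set: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, leaving core 2 as the one contested core. A small sketch for decoding such masks:

decode_mask() {
    # Print the core ids selected by an SPDK/DPDK -m hex core mask.
    local mask=$(( $1 )) core=0
    while (( mask )); do
        (( mask & 1 )) && echo "core $core"
        (( core++, mask >>= 1 ))
    done
}
decode_mask 0x7    # core 0, core 1, core 2
decode_mask 0x1c   # core 2, core 3, core 4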
00:09:13.802 [2024-09-30 22:37:40.644167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:13.802 [2024-09-30 22:37:40.772441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.802 [2024-09-30 22:37:40.776017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.802 [2024-09-30 22:37:40.776019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:14.373 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.373 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:14.373 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:14.373 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.373 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.374 [2024-09-30 22:37:41.344971] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 481808 has claimed it. 
00:09:14.374 request: 00:09:14.374 { 00:09:14.374 "method": "framework_enable_cpumask_locks", 00:09:14.374 "req_id": 1 00:09:14.374 } 00:09:14.374 Got JSON-RPC error response 00:09:14.374 response: 00:09:14.374 { 00:09:14.374 "code": -32603, 00:09:14.374 "message": "Failed to claim CPU core: 2" 00:09:14.374 } 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 481808 /var/tmp/spdk.sock 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 481808 ']' 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.374 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 481974 /var/tmp/spdk2.sock 00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 481974 ']' 00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:14.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
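When the claim is refused, rpc.py prints the request it sent and the JSON-RPC error object it got back, then exits non-zero; -32603 is the standard JSON-RPC "internal error" code, here carrying the claim-failure message. A sketch of catching that from a script, grepping the code out rather than assuming the output is pure JSON:

# Sketch: detect the rejected claim from a caller's side.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # as elsewhere in this job
if ! out=$("$SPDK"/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 2>&1); then
    # rpc.py echoes both the request and the error response, as above.
    code=$(printf '%s\n' "$out" | grep -o '"code": -[0-9]*' | head -n1)
    echo "framework_enable_cpumask_locks rejected (${code:-code not found})"
fi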
00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.635 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.896 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.896 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:14.896 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:14.896 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:14.896 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:14.896 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:14.896 00:09:14.896 real 0m2.089s 00:09:14.896 user 0m0.866s 00:09:14.896 sys 0m0.152s 00:09:14.896 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.896 22:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.896 ************************************ 00:09:14.896 END TEST locking_overlapped_coremask_via_rpc 00:09:14.896 ************************************ 00:09:14.896 22:37:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:14.896 22:37:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 481808 ]] 00:09:14.896 22:37:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 481808 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 481808 ']' 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 481808 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 481808 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 481808' 00:09:14.896 killing process with pid 481808 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 481808 00:09:14.896 22:37:41 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 481808 00:09:15.157 22:37:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 481974 ]] 00:09:15.157 22:37:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 481974 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 481974 ']' 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 481974 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
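The check_remaining_locks expansion above leans on two different bash expansions: locks=(/var/tmp/spdk_cpu_lock_*) is a glob that picks up whatever lock files exist right now, while locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) is brace expansion that always yields exactly the three names cores 0-2 should hold; the [[ ... == ... ]] then compares the joined arrays. The same idiom in isolation, with the 000..002 range as the only assumption:

# Sketch: assert that exactly the expected per-core lock files exist.
locks=(/var/tmp/spdk_cpu_lock_*)                      # glob: files actually present
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # brace expansion: wanted names
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "lock files match cores 0-2"
else
    echo "unexpected locks: ${locks[*]}" >&2
fi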
00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 481974 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 481974' 00:09:15.157 killing process with pid 481974 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 481974 00:09:15.157 22:37:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 481974 00:09:15.417 22:37:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:15.417 22:37:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:15.417 22:37:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 481808 ]] 00:09:15.417 22:37:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 481808 00:09:15.417 22:37:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 481808 ']' 00:09:15.417 22:37:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 481808 00:09:15.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (481808) - No such process 00:09:15.417 22:37:42 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 481808 is not found' 00:09:15.417 Process with pid 481808 is not found 00:09:15.417 22:37:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 481974 ]] 00:09:15.417 22:37:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 481974 00:09:15.417 22:37:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 481974 ']' 00:09:15.417 22:37:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 481974 00:09:15.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (481974) - No such process 00:09:15.417 22:37:42 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 481974 is not found' 00:09:15.417 Process with pid 481974 is not found 00:09:15.417 22:37:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:15.417 00:09:15.417 real 0m16.342s 00:09:15.417 user 0m28.132s 00:09:15.417 sys 0m5.134s 00:09:15.417 22:37:42 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.417 22:37:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.417 ************************************ 00:09:15.417 END TEST cpu_locks 00:09:15.417 ************************************ 00:09:15.417 00:09:15.417 real 0m43.153s 00:09:15.417 user 1m24.859s 00:09:15.417 sys 0m8.591s 00:09:15.417 22:37:42 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.417 22:37:42 event -- common/autotest_common.sh@10 -- # set +x 00:09:15.417 ************************************ 00:09:15.417 END TEST event 00:09:15.417 ************************************ 00:09:15.417 22:37:42 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:15.417 22:37:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.417 22:37:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.417 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:09:15.417 ************************************ 00:09:15.417 START TEST thread 00:09:15.417 ************************************ 00:09:15.417 22:37:42 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:15.679 * Looking for test storage... 00:09:15.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:15.679 22:37:42 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.679 22:37:42 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.679 22:37:42 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.679 22:37:42 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.679 22:37:42 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.679 22:37:42 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.679 22:37:42 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.679 22:37:42 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.679 22:37:42 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.679 22:37:42 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.679 22:37:42 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.679 22:37:42 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:15.679 22:37:42 thread -- scripts/common.sh@345 -- # : 1 00:09:15.679 22:37:42 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.679 22:37:42 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.679 22:37:42 thread -- scripts/common.sh@365 -- # decimal 1 00:09:15.679 22:37:42 thread -- scripts/common.sh@353 -- # local d=1 00:09:15.679 22:37:42 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.679 22:37:42 thread -- scripts/common.sh@355 -- # echo 1 00:09:15.679 22:37:42 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.679 22:37:42 thread -- scripts/common.sh@366 -- # decimal 2 00:09:15.679 22:37:42 thread -- scripts/common.sh@353 -- # local d=2 00:09:15.679 22:37:42 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.679 22:37:42 thread -- scripts/common.sh@355 -- # echo 2 00:09:15.679 22:37:42 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.679 22:37:42 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.679 22:37:42 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.679 22:37:42 thread -- scripts/common.sh@368 -- # return 0 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.679 --rc genhtml_branch_coverage=1 00:09:15.679 --rc genhtml_function_coverage=1 00:09:15.679 --rc genhtml_legend=1 00:09:15.679 --rc geninfo_all_blocks=1 00:09:15.679 --rc geninfo_unexecuted_blocks=1 00:09:15.679 00:09:15.679 ' 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.679 --rc genhtml_branch_coverage=1 00:09:15.679 --rc genhtml_function_coverage=1 00:09:15.679 --rc genhtml_legend=1 00:09:15.679 --rc geninfo_all_blocks=1 00:09:15.679 --rc geninfo_unexecuted_blocks=1 00:09:15.679 00:09:15.679 ' 00:09:15.679 22:37:42 thread 
-- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.679 --rc genhtml_branch_coverage=1 00:09:15.679 --rc genhtml_function_coverage=1 00:09:15.679 --rc genhtml_legend=1 00:09:15.679 --rc geninfo_all_blocks=1 00:09:15.679 --rc geninfo_unexecuted_blocks=1 00:09:15.679 00:09:15.679 ' 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.679 --rc genhtml_branch_coverage=1 00:09:15.679 --rc genhtml_function_coverage=1 00:09:15.679 --rc genhtml_legend=1 00:09:15.679 --rc geninfo_all_blocks=1 00:09:15.679 --rc geninfo_unexecuted_blocks=1 00:09:15.679 00:09:15.679 ' 00:09:15.679 22:37:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.679 22:37:42 thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.679 ************************************ 00:09:15.679 START TEST thread_poller_perf 00:09:15.679 ************************************ 00:09:15.679 22:37:42 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:15.679 [2024-09-30 22:37:42.692170] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:15.679 [2024-09-30 22:37:42.692272] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482435 ] 00:09:15.940 [2024-09-30 22:37:42.780568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.940 [2024-09-30 22:37:42.851276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.940 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:17.380 ====================================== 00:09:17.380 busy:2405495712 (cyc) 00:09:17.380 total_run_count: 417000 00:09:17.380 tsc_hz: 2400000000 (cyc) 00:09:17.380 ====================================== 00:09:17.380 poller_cost: 5768 (cyc), 2403 (nsec) 00:09:17.380 00:09:17.380 real 0m1.231s 00:09:17.380 user 0m1.117s 00:09:17.380 sys 0m0.109s 00:09:17.380 22:37:43 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.380 22:37:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:17.380 ************************************ 00:09:17.380 END TEST thread_poller_perf 00:09:17.380 ************************************ 00:09:17.380 22:37:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:17.380 22:37:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:17.380 22:37:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.380 22:37:43 thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.380 ************************************ 00:09:17.380 START TEST thread_poller_perf 00:09:17.380 ************************************ 00:09:17.380 22:37:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:17.380 [2024-09-30 22:37:44.002841] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:17.380 [2024-09-30 22:37:44.002966] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482772 ] 00:09:17.380 [2024-09-30 22:37:44.089526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.380 [2024-09-30 22:37:44.148405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.380 Running 1000 pollers for 1 seconds with 0 microseconds period. 
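For the run that just finished, poller_cost is plain arithmetic on the counters above it: 2405495712 busy TSC cycles over 417000 iterations is 5768 cycles per poll, and at a 2400000000 Hz TSC that is 5768 / 2.4, about 2403 nsec. Rederived in shell as a sketch:

# Sketch: recompute poller_cost from poller_perf's counters (first run above).
busy=2405495712 runs=417000 tsc_hz=2400000000
cyc=$(( busy / runs ))                        # 5768 (cyc)
nsec=$(( cyc * 1000000000 / tsc_hz ))         # 2403 (nsec)
echo "poller_cost: $cyc (cyc), $nsec (nsec)"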
00:09:18.338 ====================================== 00:09:18.338 busy:2401476766 (cyc) 00:09:18.338 total_run_count: 5543000 00:09:18.338 tsc_hz: 2400000000 (cyc) 00:09:18.338 ====================================== 00:09:18.338 poller_cost: 433 (cyc), 180 (nsec) 00:09:18.338 00:09:18.338 real 0m1.213s 00:09:18.338 user 0m1.113s 00:09:18.338 sys 0m0.095s 00:09:18.338 22:37:45 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.338 22:37:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:18.338 ************************************ 00:09:18.338 END TEST thread_poller_perf 00:09:18.338 ************************************ 00:09:18.338 22:37:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:18.338 00:09:18.338 real 0m2.798s 00:09:18.338 user 0m2.405s 00:09:18.338 sys 0m0.404s 00:09:18.338 22:37:45 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.338 22:37:45 thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.338 ************************************ 00:09:18.338 END TEST thread 00:09:18.338 ************************************ 00:09:18.338 22:37:45 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:18.338 22:37:45 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:18.338 22:37:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.338 22:37:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.338 22:37:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.338 ************************************ 00:09:18.338 START TEST app_cmdline 00:09:18.338 ************************************ 00:09:18.339 22:37:45 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:18.600 * Looking for test storage... 00:09:18.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.600 22:37:45 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:18.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.600 --rc genhtml_branch_coverage=1 00:09:18.600 --rc genhtml_function_coverage=1 00:09:18.600 --rc genhtml_legend=1 00:09:18.600 --rc geninfo_all_blocks=1 00:09:18.600 --rc geninfo_unexecuted_blocks=1 00:09:18.600 00:09:18.600 ' 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:18.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.600 --rc genhtml_branch_coverage=1 00:09:18.600 --rc genhtml_function_coverage=1 00:09:18.600 --rc genhtml_legend=1 00:09:18.600 --rc geninfo_all_blocks=1 00:09:18.600 --rc geninfo_unexecuted_blocks=1 00:09:18.600 00:09:18.600 ' 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:18.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.600 --rc genhtml_branch_coverage=1 00:09:18.600 --rc genhtml_function_coverage=1 00:09:18.600 --rc genhtml_legend=1 00:09:18.600 --rc geninfo_all_blocks=1 00:09:18.600 --rc geninfo_unexecuted_blocks=1 00:09:18.600 00:09:18.600 ' 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:18.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.600 --rc genhtml_branch_coverage=1 00:09:18.600 --rc genhtml_function_coverage=1 00:09:18.600 --rc genhtml_legend=1 00:09:18.600 --rc geninfo_all_blocks=1 00:09:18.600 --rc geninfo_unexecuted_blocks=1 00:09:18.600 00:09:18.600 ' 00:09:18.600 22:37:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:18.600 22:37:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=483178 00:09:18.600 22:37:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 483178 00:09:18.600 22:37:45 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 483178 ']' 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.600 22:37:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:18.600 [2024-09-30 22:37:45.560203] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:18.600 [2024-09-30 22:37:45.560276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483178 ] 00:09:18.861 [2024-09-30 22:37:45.641985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.861 [2024-09-30 22:37:45.702914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.432 22:37:46 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.432 22:37:46 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:19.432 22:37:46 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:19.694 { 00:09:19.694 "version": "SPDK v25.01-pre git sha1 310cb0643", 00:09:19.694 "fields": { 00:09:19.694 "major": 25, 00:09:19.694 "minor": 1, 00:09:19.694 "patch": 0, 00:09:19.694 "suffix": "-pre", 00:09:19.694 "commit": "310cb0643" 00:09:19.694 } 00:09:19.694 } 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:19.694 22:37:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:19.694 22:37:46 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:19.956 request: 00:09:19.956 { 00:09:19.956 "method": "env_dpdk_get_mem_stats", 00:09:19.956 "req_id": 1 00:09:19.956 } 00:09:19.956 Got JSON-RPC error response 00:09:19.956 response: 00:09:19.956 { 00:09:19.956 "code": -32601, 00:09:19.956 "message": "Method not found" 00:09:19.956 } 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:19.956 22:37:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 483178 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 483178 ']' 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 483178 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483178 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483178' 00:09:19.956 killing process with pid 483178 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@969 -- # kill 483178 00:09:19.956 22:37:46 app_cmdline -- common/autotest_common.sh@974 -- # wait 483178 00:09:20.216 00:09:20.216 real 0m1.685s 00:09:20.216 user 0m1.983s 00:09:20.216 sys 0m0.463s 00:09:20.216 22:37:46 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.216 22:37:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:20.216 ************************************ 00:09:20.216 END TEST app_cmdline 00:09:20.216 ************************************ 00:09:20.216 22:37:47 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:20.216 22:37:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.216 22:37:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.216 22:37:47 -- common/autotest_common.sh@10 -- # set +x 00:09:20.216 ************************************ 00:09:20.216 START TEST version 00:09:20.216 ************************************ 00:09:20.216 22:37:47 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:20.216 * Looking for test storage... 
00:09:20.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:20.216 22:37:47 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:20.216 22:37:47 version -- common/autotest_common.sh@1681 -- # lcov --version 00:09:20.216 22:37:47 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:20.475 22:37:47 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:20.475 22:37:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.475 22:37:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.475 22:37:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.475 22:37:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.475 22:37:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.475 22:37:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.475 22:37:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.475 22:37:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.475 22:37:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.475 22:37:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.475 22:37:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.475 22:37:47 version -- scripts/common.sh@344 -- # case "$op" in 00:09:20.475 22:37:47 version -- scripts/common.sh@345 -- # : 1 00:09:20.475 22:37:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.475 22:37:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.475 22:37:47 version -- scripts/common.sh@365 -- # decimal 1 00:09:20.475 22:37:47 version -- scripts/common.sh@353 -- # local d=1 00:09:20.475 22:37:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.475 22:37:47 version -- scripts/common.sh@355 -- # echo 1 00:09:20.475 22:37:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.475 22:37:47 version -- scripts/common.sh@366 -- # decimal 2 00:09:20.475 22:37:47 version -- scripts/common.sh@353 -- # local d=2 00:09:20.475 22:37:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.475 22:37:47 version -- scripts/common.sh@355 -- # echo 2 00:09:20.475 22:37:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.475 22:37:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.475 22:37:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.475 22:37:47 version -- scripts/common.sh@368 -- # return 0 00:09:20.475 22:37:47 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.475 22:37:47 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.475 --rc genhtml_branch_coverage=1 00:09:20.475 --rc genhtml_function_coverage=1 00:09:20.475 --rc genhtml_legend=1 00:09:20.475 --rc geninfo_all_blocks=1 00:09:20.475 --rc geninfo_unexecuted_blocks=1 00:09:20.475 00:09:20.475 ' 00:09:20.475 22:37:47 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.475 --rc genhtml_branch_coverage=1 00:09:20.475 --rc genhtml_function_coverage=1 00:09:20.475 --rc genhtml_legend=1 00:09:20.475 --rc geninfo_all_blocks=1 00:09:20.475 --rc geninfo_unexecuted_blocks=1 00:09:20.475 00:09:20.475 ' 00:09:20.475 22:37:47 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:20.475 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.475 --rc genhtml_branch_coverage=1 00:09:20.475 --rc genhtml_function_coverage=1 00:09:20.475 --rc genhtml_legend=1 00:09:20.475 --rc geninfo_all_blocks=1 00:09:20.475 --rc geninfo_unexecuted_blocks=1 00:09:20.475 00:09:20.475 ' 00:09:20.475 22:37:47 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.475 --rc genhtml_branch_coverage=1 00:09:20.475 --rc genhtml_function_coverage=1 00:09:20.475 --rc genhtml_legend=1 00:09:20.475 --rc geninfo_all_blocks=1 00:09:20.475 --rc geninfo_unexecuted_blocks=1 00:09:20.475 00:09:20.475 ' 00:09:20.475 22:37:47 version -- app/version.sh@17 -- # get_header_version major 00:09:20.475 22:37:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:20.475 22:37:47 version -- app/version.sh@14 -- # cut -f2 00:09:20.475 22:37:47 version -- app/version.sh@14 -- # tr -d '"' 00:09:20.475 22:37:47 version -- app/version.sh@17 -- # major=25 00:09:20.475 22:37:47 version -- app/version.sh@18 -- # get_header_version minor 00:09:20.475 22:37:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:20.475 22:37:47 version -- app/version.sh@14 -- # cut -f2 00:09:20.475 22:37:47 version -- app/version.sh@14 -- # tr -d '"' 00:09:20.475 22:37:47 version -- app/version.sh@18 -- # minor=1 00:09:20.475 22:37:47 version -- app/version.sh@19 -- # get_header_version patch 00:09:20.475 22:37:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:20.475 22:37:47 version -- app/version.sh@14 -- # cut -f2 00:09:20.475 22:37:47 version -- app/version.sh@14 -- # tr -d '"' 00:09:20.475 22:37:47 version -- app/version.sh@19 -- # patch=0 00:09:20.475 22:37:47 version -- app/version.sh@20 -- # get_header_version suffix 00:09:20.475 22:37:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:20.475 22:37:47 version -- app/version.sh@14 -- # tr -d '"' 00:09:20.475 22:37:47 version -- app/version.sh@14 -- # cut -f2 00:09:20.475 22:37:47 version -- app/version.sh@20 -- # suffix=-pre 00:09:20.475 22:37:47 version -- app/version.sh@22 -- # version=25.1 00:09:20.475 22:37:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:20.475 22:37:47 version -- app/version.sh@28 -- # version=25.1rc0 00:09:20.475 22:37:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:20.475 22:37:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:20.475 22:37:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:20.475 22:37:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:20.475 00:09:20.475 real 0m0.281s 00:09:20.475 user 0m0.165s 00:09:20.475 sys 0m0.164s 00:09:20.475 22:37:47 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.475 
22:37:47 version -- common/autotest_common.sh@10 -- # set +x 00:09:20.475 ************************************ 00:09:20.475 END TEST version 00:09:20.475 ************************************ 00:09:20.475 22:37:47 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:20.475 22:37:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:20.475 22:37:47 -- spdk/autotest.sh@194 -- # uname -s 00:09:20.475 22:37:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:20.475 22:37:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:20.475 22:37:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:20.475 22:37:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:20.475 22:37:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:20.475 22:37:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:20.475 22:37:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.476 22:37:47 -- common/autotest_common.sh@10 -- # set +x 00:09:20.476 22:37:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:20.476 22:37:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:20.476 22:37:47 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:20.476 22:37:47 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:09:20.476 22:37:47 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:20.476 22:37:47 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:20.476 22:37:47 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:20.476 22:37:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.476 22:37:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.476 22:37:47 -- common/autotest_common.sh@10 -- # set +x 00:09:20.476 ************************************ 00:09:20.476 START TEST nvmf_tcp 00:09:20.476 ************************************ 00:09:20.476 22:37:47 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:20.735 * Looking for test storage... 
00:09:20.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:20.735 22:37:47 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:20.735 22:37:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:20.735 22:37:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:20.735 22:37:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.735 22:37:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.736 22:37:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.736 22:37:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:20.736 22:37:47 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.736 22:37:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:20.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.736 --rc genhtml_branch_coverage=1 00:09:20.736 --rc genhtml_function_coverage=1 00:09:20.736 --rc genhtml_legend=1 00:09:20.736 --rc geninfo_all_blocks=1 00:09:20.736 --rc geninfo_unexecuted_blocks=1 00:09:20.736 00:09:20.736 ' 00:09:20.736 22:37:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:20.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.736 --rc genhtml_branch_coverage=1 00:09:20.736 --rc genhtml_function_coverage=1 00:09:20.736 --rc genhtml_legend=1 00:09:20.736 --rc geninfo_all_blocks=1 00:09:20.736 --rc geninfo_unexecuted_blocks=1 00:09:20.736 00:09:20.736 ' 00:09:20.736 22:37:47 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:09:20.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.736 --rc genhtml_branch_coverage=1 00:09:20.736 --rc genhtml_function_coverage=1 00:09:20.736 --rc genhtml_legend=1 00:09:20.736 --rc geninfo_all_blocks=1 00:09:20.736 --rc geninfo_unexecuted_blocks=1 00:09:20.736 00:09:20.736 ' 00:09:20.736 22:37:47 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:20.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.736 --rc genhtml_branch_coverage=1 00:09:20.736 --rc genhtml_function_coverage=1 00:09:20.736 --rc genhtml_legend=1 00:09:20.736 --rc geninfo_all_blocks=1 00:09:20.736 --rc geninfo_unexecuted_blocks=1 00:09:20.736 00:09:20.736 ' 00:09:20.736 22:37:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:20.736 22:37:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:20.736 22:37:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:20.736 22:37:47 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.736 22:37:47 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.736 22:37:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.736 ************************************ 00:09:20.736 START TEST nvmf_target_core 00:09:20.736 ************************************ 00:09:20.736 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:20.997 * Looking for test storage... 00:09:20.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:20.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.997 --rc genhtml_branch_coverage=1 00:09:20.997 --rc genhtml_function_coverage=1 00:09:20.997 --rc genhtml_legend=1 00:09:20.997 --rc geninfo_all_blocks=1 00:09:20.997 --rc geninfo_unexecuted_blocks=1 00:09:20.997 00:09:20.997 ' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:20.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.997 --rc genhtml_branch_coverage=1 00:09:20.997 --rc genhtml_function_coverage=1 00:09:20.997 --rc genhtml_legend=1 00:09:20.997 --rc geninfo_all_blocks=1 00:09:20.997 --rc geninfo_unexecuted_blocks=1 00:09:20.997 00:09:20.997 ' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:20.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.997 --rc genhtml_branch_coverage=1 00:09:20.997 --rc genhtml_function_coverage=1 00:09:20.997 --rc genhtml_legend=1 00:09:20.997 --rc geninfo_all_blocks=1 00:09:20.997 --rc geninfo_unexecuted_blocks=1 00:09:20.997 00:09:20.997 ' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:20.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.997 --rc genhtml_branch_coverage=1 00:09:20.997 --rc genhtml_function_coverage=1 00:09:20.997 --rc genhtml_legend=1 00:09:20.997 --rc geninfo_all_blocks=1 00:09:20.997 --rc geninfo_unexecuted_blocks=1 00:09:20.997 00:09:20.997 ' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:20.997 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.998 22:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.998 
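Aside on the "common.sh: line 33: [: : integer expression expected" message in the trace above: that is bash's test builtin rejecting a numeric comparison whose left operand is empty. The expansion shows '[' '' -eq 1 ']', so the variable being tested was unset when common.sh was sourced; the test simply fails and the script carries on. A minimal reproduction and a defensive rewrite, using a hypothetical variable name "flag" rather than the one in common.sh:

  flag=""
  [ "$flag" -eq 1 ]         # bash: [: : integer expression expected (test returns status 2)
  [ "${flag:-0}" -eq 1 ]    # safe: an empty value defaults to 0, so the test is just false

The same pattern recurs each time common.sh is re-sourced below, which is why the message appears once per test script.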
************************************ 00:09:20.998 START TEST nvmf_abort 00:09:20.998 ************************************ 00:09:20.998 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:21.259 * Looking for test storage... 00:09:21.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:21.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.259 --rc genhtml_branch_coverage=1 00:09:21.259 --rc genhtml_function_coverage=1 00:09:21.259 --rc genhtml_legend=1 00:09:21.259 --rc geninfo_all_blocks=1 00:09:21.259 --rc geninfo_unexecuted_blocks=1 00:09:21.259 00:09:21.259 ' 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:21.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.259 --rc genhtml_branch_coverage=1 00:09:21.259 --rc genhtml_function_coverage=1 00:09:21.259 --rc genhtml_legend=1 00:09:21.259 --rc geninfo_all_blocks=1 00:09:21.259 --rc geninfo_unexecuted_blocks=1 00:09:21.259 00:09:21.259 ' 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:21.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.259 --rc genhtml_branch_coverage=1 00:09:21.259 --rc genhtml_function_coverage=1 00:09:21.259 --rc genhtml_legend=1 00:09:21.259 --rc geninfo_all_blocks=1 00:09:21.259 --rc geninfo_unexecuted_blocks=1 00:09:21.259 00:09:21.259 ' 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:21.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.259 --rc genhtml_branch_coverage=1 00:09:21.259 --rc genhtml_function_coverage=1 00:09:21.259 --rc genhtml_legend=1 00:09:21.259 --rc geninfo_all_blocks=1 00:09:21.259 --rc geninfo_unexecuted_blocks=1 00:09:21.259 00:09:21.259 ' 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.259 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.260 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.399 22:37:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:29.399 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:29.399 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:29.399 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:29.400 22:37:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:29.400 Found net devices under 0000:31:00.0: cvl_0_0 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:29.400 Found net devices under 0000:31:00.1: cvl_0_1 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.400 22:37:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:09:29.400 00:09:29.400 --- 10.0.0.2 ping statistics --- 00:09:29.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.400 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:09:29.400 00:09:29.400 --- 10.0.0.1 ping statistics --- 00:09:29.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.400 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:29.400 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=487733 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 487733 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 487733 ']' 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.400 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:29.400 [2024-09-30 22:37:56.074020] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
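Just before this nvmf_tgt startup banner, the nvmftestinit trace built a two-port loopback rig out of the e810 NICs: cvl_0_0 was moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stayed in the root namespace as 10.0.0.1 (the initiator side), an iptables rule admitted TCP port 4420, and both directions were verified with single pings. A standalone sketch of the same topology, with stand-in interface names eth0/eth1 (the cvl_* names are specific to this rig):

  ip netns add tgt_ns
  ip link set eth0 netns tgt_ns                      # target port lives in its own namespace
  ip addr add 10.0.0.1/24 dev eth1                   # initiator side stays in the root namespace
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth0
  ip link set eth1 up
  ip netns exec tgt_ns ip link set eth0 up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> namespaced target
  ip netns exec tgt_ns ping -c 1 10.0.0.1            # and back again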
00:09:29.400 [2024-09-30 22:37:56.074083] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.400 [2024-09-30 22:37:56.167472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.400 [2024-09-30 22:37:56.264102] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.400 [2024-09-30 22:37:56.264169] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.400 [2024-09-30 22:37:56.264182] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.400 [2024-09-30 22:37:56.264189] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.400 [2024-09-30 22:37:56.264195] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.400 [2024-09-30 22:37:56.264387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.400 [2024-09-30 22:37:56.264546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.400 [2024-09-30 22:37:56.264546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:29.972 [2024-09-30 22:37:56.946949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.972 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 Malloc0 00:09:30.233 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:30.233 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 Delay0 
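The rpc_cmd calls traced here and continuing just below stand up the abort test's target: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, a delay bdev layered on top with 1,000,000 us latencies, then a subsystem exposing that namespace on 10.0.0.2:4420. The same sequence expressed as plain rpc.py calls, flags copied verbatim from the trace, assuming a target reachable on the default RPC socket (the harness actually wraps these in rpc_cmd inside the cvl_0_0_ns_spdk namespace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The delay bdev is presumably what keeps I/O inflight long enough for the abort example below to have commands to cancel, which matches the "success 28554" abort count in its output.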
00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 [2024-09-30 22:37:57.037906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:30.233 [2024-09-30 22:37:57.167071] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:32.789 Initializing NVMe Controllers 00:09:32.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:32.789 controller IO queue size 128 less than required 00:09:32.789 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:32.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:32.789 Initialization complete. Launching workers. 
00:09:32.789 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28550 00:09:32.789 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28611, failed to submit 62 00:09:32.789 success 28554, unsuccessful 57, failed 0 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.789 rmmod nvme_tcp 00:09:32.789 rmmod nvme_fabrics 00:09:32.789 rmmod nvme_keyring 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 487733 ']' 00:09:32.789 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 487733 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 487733 ']' 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 487733 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487733 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487733' 00:09:32.790 killing process with pid 487733 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 487733 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 487733 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.790 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.704 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.704 00:09:34.704 real 0m13.568s 00:09:34.704 user 0m13.668s 00:09:34.704 sys 0m6.728s 00:09:34.704 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.704 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:34.704 ************************************ 00:09:34.704 END TEST nvmf_abort 00:09:34.704 ************************************ 00:09:34.704 22:38:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:34.704 22:38:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.704 22:38:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.704 22:38:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.704 ************************************ 00:09:34.704 START TEST nvmf_ns_hotplug_stress 00:09:34.704 ************************************ 00:09:34.704 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:34.966 * Looking for test storage... 
00:09:34.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.966 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:34.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.967 --rc genhtml_branch_coverage=1 00:09:34.967 --rc genhtml_function_coverage=1 00:09:34.967 --rc genhtml_legend=1 00:09:34.967 --rc geninfo_all_blocks=1 00:09:34.967 --rc geninfo_unexecuted_blocks=1 00:09:34.967 00:09:34.967 ' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:34.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.967 --rc genhtml_branch_coverage=1 00:09:34.967 --rc genhtml_function_coverage=1 00:09:34.967 --rc genhtml_legend=1 00:09:34.967 --rc geninfo_all_blocks=1 00:09:34.967 --rc geninfo_unexecuted_blocks=1 00:09:34.967 00:09:34.967 ' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:34.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.967 --rc genhtml_branch_coverage=1 00:09:34.967 --rc genhtml_function_coverage=1 00:09:34.967 --rc genhtml_legend=1 00:09:34.967 --rc geninfo_all_blocks=1 00:09:34.967 --rc geninfo_unexecuted_blocks=1 00:09:34.967 00:09:34.967 ' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:34.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.967 --rc genhtml_branch_coverage=1 00:09:34.967 --rc genhtml_function_coverage=1 00:09:34.967 --rc genhtml_legend=1 00:09:34.967 --rc geninfo_all_blocks=1 00:09:34.967 --rc geninfo_unexecuted_blocks=1 00:09:34.967 00:09:34.967 ' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.967 22:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:43.108 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:43.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.109 22:38:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:43.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:43.109 Found net devices under 0000:31:00.0: cvl_0_0 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:43.109 Found net devices under 0000:31:00.1: cvl_0_1 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.109 22:38:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:43.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:09:43.109 00:09:43.109 --- 10.0.0.2 ping statistics --- 00:09:43.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.109 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:09:43.109 00:09:43.109 --- 10.0.0.1 ping statistics --- 00:09:43.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.109 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:09:43.109 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=492835 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 492835 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 492835 ']' 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.110 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.110 [2024-09-30 22:38:09.717774] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:09:43.110 [2024-09-30 22:38:09.717836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.110 [2024-09-30 22:38:09.814811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.110 [2024-09-30 22:38:09.909934] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.110 [2024-09-30 22:38:09.910003] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.110 [2024-09-30 22:38:09.910012] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.110 [2024-09-30 22:38:09.910019] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.110 [2024-09-30 22:38:09.910025] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
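
A note on the topology the trace above just built: the two E810 ports show up as cvl_0_0 and cvl_0_1, and the test splits them across network namespaces so the NVMe/TCP target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1, root namespace) exchange traffic over the physical link. A minimal sketch of the same setup, using the interface and namespace names from the trace (run as root; adapt the names to your NICs):

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                            # initiator -> target sanity check

The earlier "[: : integer expression expected" complaint from nvmf/common.sh line 33 is cosmetic: an unset test flag expands to the empty string, and '[' '' -eq 1 ']' is not a valid integer comparison; a guarded expansion such as [ "${FLAG:-0}" -eq 1 ] (variable name illustrative) would avoid it. With the namespaces wired up, nvmf_tgt is then launched inside cvl_0_0_ns_spdk pinned to cores 1-3 (-m 0xE).
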
00:09:43.110 [2024-09-30 22:38:09.910198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.110 [2024-09-30 22:38:09.910501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.110 [2024-09-30 22:38:09.910501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.682 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.682 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:09:43.682 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:43.682 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.682 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.682 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.682 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:43.682 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:43.942 [2024-09-30 22:38:10.761611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.942 22:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:44.203 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.203 [2024-09-30 22:38:11.178947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.203 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:44.464 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:44.725 Malloc0 00:09:44.725 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:44.986 Delay0 00:09:44.986 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.246 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:45.246 NULL1 00:09:45.246 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:45.507 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=493299 00:09:45.507 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:45.507 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:45.507 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.892 Read completed with error (sct=0, sc=11) 00:09:46.892 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.892 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:46.892 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:47.153 true 00:09:47.153 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:47.153 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.093 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.093 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:48.093 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:48.093 true 00:09:48.354 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:48.354 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.354 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.615 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:48.615 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:48.615 true 00:09:48.876 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:48.876 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.817 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.077 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:50.077 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:50.337 true 00:09:50.337 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:50.337 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.279 22:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.279 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:51.279 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:51.539 true 00:09:51.539 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:51.539 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.539 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.800 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:51.800 22:38:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:52.062 true 00:09:52.062 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:52.062 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.062 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.323 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:52.323 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:52.584 true 00:09:52.584 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:52.584 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.846 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.846 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:52.846 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:53.106 true 00:09:53.107 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:53.107 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.492 22:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.492 22:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:54.492 22:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:09:54.492 true 00:09:54.492 22:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:54.492 22:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.433 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.695 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:55.695 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:55.695 true 00:09:55.695 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:55.695 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.956 22:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.217 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:56.217 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:56.217 true 00:09:56.217 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:56.217 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.479 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.741 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:56.741 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:56.741 true 00:09:56.741 22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:56.741 
22:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.684 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.945 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:57.945 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:57.945 true 00:09:57.945 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:57.945 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.205 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.466 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:58.466 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:58.466 true 00:09:58.752 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:58.752 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.753 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.014 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:59.014 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:59.014 true 00:09:59.275 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:59.275 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.275 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.536 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:59.536 22:38:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:59.796 true 00:09:59.796 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:09:59.796 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.736 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.997 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:00.997 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:01.259 true 00:10:01.259 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:01.259 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.200 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.200 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:02.200 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:02.459 true 00:10:02.459 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:02.459 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.459 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.718 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:02.718 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:02.977 true 00:10:02.977 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:02.977 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.977 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.237 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:03.237 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:03.499 true 00:10:03.499 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:03.499 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.499 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.759 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:03.759 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:04.020 true 00:10:04.020 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:04.020 22:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.020 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.281 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:04.281 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:04.542 true 00:10:04.542 22:38:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:04.542 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.484 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.484 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:05.484 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:05.744 true 00:10:05.744 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:05.744 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.004 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.004 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:06.004 22:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:06.263 true 00:10:06.263 22:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:06.264 22:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.645 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.645 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:07.645 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:07.645 true 00:10:07.645 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:07.645 22:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.586 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.847 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:08.847 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:08.847 true 00:10:08.847 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:08.847 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.115 22:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.456 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:09.456 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:09.456 true 00:10:09.456 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:09.456 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.770 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.770 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:09.770 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:10.055 true 00:10:10.055 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:10.055 22:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.315 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.315 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:10.315 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:10.574 true 00:10:10.574 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:10.574 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.960 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.960 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:11.960 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:11.960 true 00:10:11.960 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:11.960 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.901 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.162 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:13.162 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:13.162 true 00:10:13.162 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:13.162 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.423 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:13.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:13.684 true 00:10:13.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:13.684 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.071 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.071 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:15.071 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:15.331 true 00:10:15.332 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:15.332 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.274 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.274 Initializing NVMe Controllers 00:10:16.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:16.274 Controller IO queue size 128, less than required. 00:10:16.274 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:16.274 Controller IO queue size 128, less than required. 00:10:16.274 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:16.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:16.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:16.274 Initialization complete. Launching workers. 
00:10:16.274 ======================================================== 00:10:16.274 Latency(us) 00:10:16.274 Device Information : IOPS MiB/s Average min max 00:10:16.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2363.80 1.15 33115.16 1906.09 1046692.76 00:10:16.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17982.69 8.78 7117.79 1123.39 399828.46 00:10:16.274 ======================================================== 00:10:16.274 Total : 20346.49 9.93 10138.09 1123.39 1046692.76 00:10:16.274 00:10:16.274 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:16.274 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:16.534 true 00:10:16.534 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493299 00:10:16.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (493299) - No such process 00:10:16.534 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 493299 00:10:16.534 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.795 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:16.795 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:16.795 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:16.795 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:16.795 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:16.795 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:17.056 null0 00:10:17.056 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.056 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.056 22:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:17.056 null1 00:10:17.316 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.316 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.316 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:17.316 null2 00:10:17.316 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.316 
22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.316 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:17.577 null3 00:10:17.577 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.577 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.577 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:17.838 null4 00:10:17.838 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.838 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.838 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:17.838 null5 00:10:17.838 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.838 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.838 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:18.099 null6 00:10:18.099 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:18.099 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:18.099 22:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:18.361 null7 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
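
The trace up to this point is the single-namespace phase of the test: while the background I/O generator (PID 493299) stays alive, the script keeps detaching namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaching the Delay0 bdev, bumping null_size, and resizing NULL1 to match (1023 through 1034 in this window). The generator's closing report above reflects that churn: NSID 1 managed only 2363.80 IOPS with an average latency around 33 ms, against roughly 7 ms on NSID 2, and the suppressed "Read completed with error (sct=0, sc=11)" completions are the expected fallout of reads racing the namespace removals. A minimal sketch of the loop this part of the trace implies; the control flow and the initial value are reconstructed from the traced statements, not quoted from the repository script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # invoked with this full path throughout the trace
    perf_pid=493299  # PID of the I/O generator, taken from the kill -0 probes above
    null_size=1022   # value entering this window; the script starts it earlier
    while kill -0 "$perf_pid"; do                                        # sh@44: loop until the generator exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
        ((++null_size))                                                  # sh@49
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # sh@50
    done
    wait "$perf_pid"                                                     # sh@53

kill -0 delivers no signal and only probes whether the PID still exists, so the first failing probe is what emits the "line 44: kill: (493299) - No such process" message captured above, after which the script falls through to the wait and starts stripping the namespaces.
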
00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
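
With the generator gone, the script removes NSIDs 1 and 2 (sh@54-55) and fans out into the multi-worker phase traced above: eight null bdevs, null0 through null7, are created at sh@58-60, and one background add_remove worker per bdev is launched at sh@62-64, its PID pushed onto pids for the join at sh@66 a little further down (wait 500031 500032 ...). A sketch of that setup under the same assumptions as the previous one; by rpc.py's bdev_null_create convention, the traced arguments "100 4096" are a 100 MB bdev with a 4096-byte block size:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do      # sh@59-60: create null0 .. null7
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do      # sh@62-64: namespace ID i+1 is paired with bdev null$i
        add_remove $((i + 1)) "null$i" &      # add_remove as sketched after the next trace chunk
        pids+=($!)
    done
    wait "${pids[@]}"                         # sh@66: join all eight workers
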
00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:18.361 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
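
Each worker is one instance of the add_remove shell function, and the trace spells out its entire body as sh@14-18: ten rounds of attaching the worker's bdev under its fixed namespace ID and detaching it again. Reconstructed from those traced statements (the function in the repository may differ in detail):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                 # sh@14: bound by the add_remove <nsid> <bdev> calls above
        for ((i = 0; i < 10; i++)); do        # sh@16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

All eight instances go through the same RPC server against the same subsystem, so from here on their sh@16-18 steps interleave freely; that is why the add and remove calls below arrive in shuffled namespace order, while each worker still keeps its own add/remove pairs in sequence.
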
00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 500031 500032 500034 500036 500038 500040 500042 500044 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.362 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.624 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.885 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 22:38:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.150 22:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.150 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.150 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.150 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.150 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.150 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.150 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.150 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.412 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.413 22:38:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.413 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.675 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.937 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.200 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.200 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.200 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:20.200 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.200 22:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:20.200 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.462 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.724 22:38:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.724 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:20.987 22:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:21.249 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.511 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:21.773 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:22.034 22:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:10:22.034 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.034 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:22.034 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.034 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.034 rmmod nvme_tcp 00:10:22.034 rmmod nvme_fabrics 00:10:22.034 rmmod nvme_keyring 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 492835 ']' 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 492835 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 492835 ']' 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 492835 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 492835 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 492835' 00:10:22.294 killing process with pid 492835 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 492835 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 492835 
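The churn above is the hotplug loop of ns_hotplug_stress.sh: line @17 attaches a namespace to nqn.2016-06.io.spdk:cnode1 backed by the matching null bdev (nsid 1 maps to null0, nsid 8 to null7), line @18 detaches one again, and line @16 bounds the whole thing at ten iterations. A loose sketch of that loop, reconstructed from the xtrace — the rpc.py invocations and the (( i < 10 )) bound are taken verbatim from the log, while the randomized nsid ordering, the inner loops, and the variable names are assumptions (the real script may batch or background these calls differently):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        # @17: attach namespace N, backed by null bdev null(N-1), in shuffled order
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # @18: detach the namespaces again while the subsystem stays live
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

In the trace the adds and removes interleave rather than running as two clean batches, which is the point of the stress: namespaces come and go underneath a live subsystem. Once the counter runs out, nvmftestfini unloads nvme-tcp and nvme-fabrics (the rmmod lines above) and stops the nvmf_tgt reactor, pid 492835; the records that follow restore the SPDK_NVMF-tagged iptables state and flush the test addresses before the timing summary closes the test.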
00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.294 22:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.842 00:10:24.842 real 0m49.643s 00:10:24.842 user 3m12.545s 00:10:24.842 sys 0m15.927s 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.842 ************************************ 00:10:24.842 END TEST nvmf_ns_hotplug_stress 00:10:24.842 ************************************ 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.842 ************************************ 00:10:24.842 START TEST nvmf_delete_subsystem 00:10:24.842 ************************************ 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:24.842 * Looking for test storage... 
00:10:24.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:24.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.842 --rc genhtml_branch_coverage=1 00:10:24.842 --rc genhtml_function_coverage=1 00:10:24.842 --rc genhtml_legend=1 00:10:24.842 --rc geninfo_all_blocks=1 00:10:24.842 --rc geninfo_unexecuted_blocks=1 00:10:24.842 00:10:24.842 ' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:24.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.842 --rc genhtml_branch_coverage=1 00:10:24.842 --rc genhtml_function_coverage=1 00:10:24.842 --rc genhtml_legend=1 00:10:24.842 --rc geninfo_all_blocks=1 00:10:24.842 --rc geninfo_unexecuted_blocks=1 00:10:24.842 00:10:24.842 ' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:24.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.842 --rc genhtml_branch_coverage=1 00:10:24.842 --rc genhtml_function_coverage=1 00:10:24.842 --rc genhtml_legend=1 00:10:24.842 --rc geninfo_all_blocks=1 00:10:24.842 --rc geninfo_unexecuted_blocks=1 00:10:24.842 00:10:24.842 ' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:24.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.842 --rc genhtml_branch_coverage=1 00:10:24.842 --rc genhtml_function_coverage=1 00:10:24.842 --rc genhtml_legend=1 00:10:24.842 --rc geninfo_all_blocks=1 00:10:24.842 --rc geninfo_unexecuted_blocks=1 00:10:24.842 00:10:24.842 ' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.842 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.843 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:32.989 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:32.989 22:38:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:32.989 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:32.989 Found net devices under 0000:31:00.0: cvl_0_0 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.989 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:32.990 Found net devices under 0000:31:00.1: cvl_0_1 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
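The device scan above is how nvmf/common.sh maps the two e810 ports (0000:31:00.0 and 0000:31:00.1, device id 0x159b) to kernel interfaces: for each PCI address it globs the bound netdev name out of sysfs and strips the path. Condensed from the logged commands at common.sh@406-@425, assuming pci_devs already holds the two addresses and with the surrounding driver checks elided:

    for pci in "${pci_devs[@]}"; do                       # 0000:31:00.0, 0000:31:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs exposes the netdev name
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With cvl_0_0 and cvl_0_1 collected, the records that follow split them into a back-to-back pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), an iptables ACCEPT rule opens port 4420, and the two pings confirm both directions before the target starts.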
00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.990 22:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:10:32.990 00:10:32.990 --- 10.0.0.2 ping statistics --- 00:10:32.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.990 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:10:32.990 00:10:32.990 --- 10.0.0.1 ping statistics --- 00:10:32.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.990 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=505280 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 505280 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 505280 ']' 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:32.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.990 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.990 [2024-09-30 22:38:59.341358] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:10:32.990 [2024-09-30 22:38:59.341425] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.990 [2024-09-30 22:38:59.430892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:32.990 [2024-09-30 22:38:59.526082] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.990 [2024-09-30 22:38:59.526141] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.990 [2024-09-30 22:38:59.526149] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.990 [2024-09-30 22:38:59.526156] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.990 [2024-09-30 22:38:59.526162] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.990 [2024-09-30 22:38:59.526330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.990 [2024-09-30 22:38:59.526332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.251 [2024-09-30 22:39:00.220714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.251 [2024-09-30 22:39:00.245057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.251 NULL1 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.251 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.512 Delay0 00:10:33.513 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.513 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.513 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.513 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.513 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.513 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=505598 00:10:33.513 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:33.513 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:33.513 [2024-09-30 22:39:00.362087] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
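The rpc_cmd invocations traced above are thin wrappers around SPDK's scripts/rpc.py, so the whole target bring-up can be restated as plain RPC calls. A minimal sketch, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and leaving out the ip netns exec cvl_0_0_ns_spdk prefix the harness applies:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                           # any host allowed, max 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512 B blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000              # ~1 s avg/p99 latency on every I/O
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The second-long artificial latencies on Delay0 keep a full queue of I/O outstanding, which is presumably the point: they give the nvmf_delete_subsystem below something to race against.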
00:10:35.427 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:35.427 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.427 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:35.688 Read completed with error (sct=0, sc=8)
00:10:35.688 Write completed with error (sct=0, sc=8)
00:10:35.688 starting I/O failed: -6
[... repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines trimmed; the distinct *ERROR* entries they surround are kept below ...]
00:10:35.689 [2024-09-30 22:39:02.507989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef390 is same with the state(6) to be set
00:10:35.689 [2024-09-30 22:39:02.508659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc68000c00 is same with the state(6) to be set
00:10:36.632 [2024-09-30 22:39:03.461131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff06b0 is same with the state(6) to be set
00:10:36.633 [2024-09-30 22:39:03.508408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef1b0 is same with the state(6) to be set
00:10:36.633 [2024-09-30 22:39:03.508547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef6c0 is same with the state(6) to be set
00:10:36.633 [2024-09-30 22:39:03.508763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc6800cfe0 is same with the state(6) to be set
00:10:36.633 [2024-09-30 22:39:03.508858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc6800d780 is same with the state(6) to be set
00:10:36.633 Initializing NVMe Controllers
00:10:36.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:36.633 Controller IO queue size 128, less than required.
00:10:36.633 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:36.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:36.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:36.633 Initialization complete. Launching workers.
00:10:36.633 ========================================================
00:10:36.633                                                                        Latency(us)
00:10:36.633 Device Information                                                     :    IOPS    MiB/s    Average       min        max
00:10:36.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  165.29     0.08  942216.69    471.81  2002947.16
00:10:36.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  149.90     0.07  973022.35    315.78  2001023.27
00:10:36.633 ========================================================
00:10:36.633 Total                                                                  :  315.18     0.15  956867.57    315.78  2002947.16
00:10:36.633
00:10:36.633 [2024-09-30 22:39:03.509364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff06b0 (9): Bad file descriptor
00:10:36.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:36.633 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.633 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:36.633 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 505598
00:10:36.633 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 505598
00:10:37.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (505598) - No such process
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 505598
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 505598
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 505598
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
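The interleaved delay=0 / kill -0 / sleep 0.5 / (( delay++ > 30 )) entries above are the script's poll-and-assert step: wait for the perf process to die once its subsystem is gone, then require that it exited with an error. A sketch of the equivalent logic, reconstructed from the xtrace rather than quoted from delete_subsystem.sh (the real NOT helper in autotest_common.sh also whitelists certain exit codes):

  perf_pid=505598                      # the spdk_nvme_perf instance started earlier
  delay=0
  # Deleting the subsystem fails all outstanding I/O, so perf should exit
  # on its own; allow ~15 s (30 naps of 0.5 s) before declaring a hang.
  while kill -0 "$perf_pid" 2> /dev/null; do
      (( delay++ > 30 )) && exit 1
      sleep 0.5
  done
  # NOT inverts a command's exit status: it succeeds only if the command fails.
  NOT() { if "$@"; then return 1; else return 0; fi; }
  NOT wait "$perf_pid"                 # perf must have reported a nonzero status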
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 [2024-09-30 22:39:04.041130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=506418 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506418 00:10:37.204 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:37.204 [2024-09-30 22:39:04.129689] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
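The same spdk_nvme_perf flags recur in both phases; an annotated restatement of the invocation above, with glosses taken from the tool's help output (the array form is only there to make room for comments):

  perf_args=(
      -c 0xC                                                    # core mask: workers on cores 2 and 3
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'  # NVMe-oF target transport ID
      -t 3                                                      # run time in seconds (5 in the first phase)
      -q 128                                                    # queue depth
      -w randrw -M 70                                           # random mixed workload, 70% reads
      -o 512                                                    # I/O size in bytes
      -P 4                                                      # I/O qpairs per namespace
  )
  ./build/bin/spdk_nvme_perf "${perf_args[@]}"

This second run polls with kill -0 just as before, but the subsystem is left up, so the loop simply rides out the 3-second workload.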
00:10:37.776 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:37.776 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506418 00:10:37.776 22:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:38.347 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:38.347 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506418 00:10:38.347 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:38.607 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:38.607 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506418 00:10:38.607 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:39.176 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:39.177 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506418 00:10:39.177 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:39.747 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:39.747 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506418 00:10:39.747 22:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:40.319 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:40.319 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506418 00:10:40.319 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:40.582 Initializing NVMe Controllers 00:10:40.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:40.582 Controller IO queue size 128, less than required. 00:10:40.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:40.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:40.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:40.582 Initialization complete. Launching workers. 
00:10:40.582 ========================================================
00:10:40.582                                                                        Latency(us)
00:10:40.582 Device Information                                                     :    IOPS    MiB/s     Average         min         max
00:10:40.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00     0.06  1004200.45  1000207.69  1043931.30
00:10:40.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00     0.06  1003198.14  1000229.23  1008524.76
00:10:40.582 ========================================================
00:10:40.582 Total                                                                  :  256.00     0.12  1003699.30  1000207.69  1043931.30
00:10:40.582
00:10:40.582 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:40.582 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506418
00:10:40.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (506418) - No such process
00:10:40.582 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 506418
00:10:40.582 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:40.582 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:40.582 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:10:40.582 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:40.844 rmmod nvme_tcp
00:10:40.844 rmmod nvme_fabrics
00:10:40.844 rmmod nvme_keyring
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 505280 ']'
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 505280
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 505280 ']'
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 505280
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 505280
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo
']' 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 505280' 00:10:40.844 killing process with pid 505280 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 505280 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 505280 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:40.844 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:10:41.105 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.105 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.105 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.105 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.105 22:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.020 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.020 00:10:43.020 real 0m18.554s 00:10:43.020 user 0m30.954s 00:10:43.020 sys 0m6.916s 00:10:43.020 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.020 22:39:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.020 ************************************ 00:10:43.020 END TEST nvmf_delete_subsystem 00:10:43.020 ************************************ 00:10:43.020 22:39:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:43.020 22:39:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.020 22:39:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.020 22:39:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.020 ************************************ 00:10:43.020 START TEST nvmf_host_management 00:10:43.020 ************************************ 00:10:43.020 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:43.284 * Looking for test storage... 
00:10:43.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:43.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.284 --rc genhtml_branch_coverage=1 00:10:43.284 --rc genhtml_function_coverage=1 00:10:43.284 --rc genhtml_legend=1 00:10:43.284 --rc geninfo_all_blocks=1 00:10:43.284 --rc geninfo_unexecuted_blocks=1 00:10:43.284 00:10:43.284 ' 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:43.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.284 --rc genhtml_branch_coverage=1 00:10:43.284 --rc genhtml_function_coverage=1 00:10:43.284 --rc genhtml_legend=1 00:10:43.284 --rc geninfo_all_blocks=1 00:10:43.284 --rc geninfo_unexecuted_blocks=1 00:10:43.284 00:10:43.284 ' 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:43.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.284 --rc genhtml_branch_coverage=1 00:10:43.284 --rc genhtml_function_coverage=1 00:10:43.284 --rc genhtml_legend=1 00:10:43.284 --rc geninfo_all_blocks=1 00:10:43.284 --rc geninfo_unexecuted_blocks=1 00:10:43.284 00:10:43.284 ' 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:43.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.284 --rc genhtml_branch_coverage=1 00:10:43.284 --rc genhtml_function_coverage=1 00:10:43.284 --rc genhtml_legend=1 00:10:43.284 --rc geninfo_all_blocks=1 00:10:43.284 --rc geninfo_unexecuted_blocks=1 00:10:43.284 00:10:43.284 ' 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.284 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:43.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.285 22:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:51.426 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 
-- # [[ tcp == rdma ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:51.426 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.426 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:51.427 Found net devices under 0000:31:00.0: cvl_0_0 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:51.427 Found net devices under 0000:31:00.1: 
cvl_0_1 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:51.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:51.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms
00:10:51.427 
00:10:51.427 --- 10.0.0.2 ping statistics ---
00:10:51.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:51.427 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:51.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:51.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms
00:10:51.427 
00:10:51.427 --- 10.0.0.1 ping statistics ---
00:10:51.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:51.427 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=511951
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 511951
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 511951 ']'
00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.427 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:51.427 [2024-09-30 22:39:17.974636] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:10:51.427 [2024-09-30 22:39:17.974708] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.427 [2024-09-30 22:39:18.067461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.427 [2024-09-30 22:39:18.163433] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.427 [2024-09-30 22:39:18.163498] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.427 [2024-09-30 22:39:18.163507] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.427 [2024-09-30 22:39:18.163514] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.427 [2024-09-30 22:39:18.163521] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
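The app.c notices above are app_setup_trace at work: because nvmf_tgt was launched with -e 0xFFFF, every tracepoint group is enabled and the trace history is kept in the shared-memory file /dev/shm/nvmf_trace.0. A minimal sketch of acting on the hint the notice itself prints, while the target is still up (output path illustrative; spdk_trace assumed to be built under build/bin in this tree):

    # Snapshot the live trace of app instance 0 ("-i 0" matches the
    # "nvmf_tgt -i 0" invocation above); "-s nvmf" selects the app name.
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.snapshot.txt

    # Or keep the raw shared-memory file for offline decoding, as the
    # last notice suggests.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved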
00:10:51.427 [2024-09-30 22:39:18.163701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.427 [2024-09-30 22:39:18.163843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.427 [2024-09-30 22:39:18.164006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:51.427 [2024-09-30 22:39:18.164007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.000 [2024-09-30 22:39:18.839807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.000 Malloc0 00:10:52.000 [2024-09-30 22:39:18.909242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=512093 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 512093 /var/tmp/bdevperf.sock 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 512093 ']' 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:52.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:52.000 { 00:10:52.000 "params": { 00:10:52.000 "name": "Nvme$subsystem", 00:10:52.000 "trtype": "$TEST_TRANSPORT", 00:10:52.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.000 "adrfam": "ipv4", 00:10:52.000 "trsvcid": "$NVMF_PORT", 00:10:52.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.000 "hdgst": ${hdgst:-false}, 00:10:52.000 "ddgst": ${ddgst:-false} 00:10:52.000 }, 00:10:52.000 "method": "bdev_nvme_attach_controller" 00:10:52.000 } 00:10:52.000 EOF 00:10:52.000 )") 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:10:52.000 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:10:52.001 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:52.001 "params": { 00:10:52.001 "name": "Nvme0", 00:10:52.001 "trtype": "tcp", 00:10:52.001 "traddr": "10.0.0.2", 00:10:52.001 "adrfam": "ipv4", 00:10:52.001 "trsvcid": "4420", 00:10:52.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:52.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:52.001 "hdgst": false, 00:10:52.001 "ddgst": false 00:10:52.001 }, 00:10:52.001 "method": "bdev_nvme_attach_controller" 00:10:52.001 }' 00:10:52.264 [2024-09-30 22:39:19.020855] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
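gen_nvmf_target_json above expands to a single bdev_nvme_attach_controller stanza, which --json /dev/fd/63 feeds straight into bdevperf. For interactive experiments the same attach can be sketched as a one-shot RPC against the tool's socket instead (flag spellings assumed from current SPDK rpc.py; bdevperf would need to be started waiting for RPC rather than with --json):

    # Mirrors the printed "params" block: same bdev name, transport,
    # address, port, and NQNs.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0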
00:10:52.264 [2024-09-30 22:39:19.020935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512093 ] 00:10:52.264 [2024-09-30 22:39:19.105776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.264 [2024-09-30 22:39:19.203345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.526 Running I/O for 10 seconds... 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:53.099 22:39:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.099 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:53.099 [2024-09-30 22:39:19.928390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 
22:39:19.928622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.099 [2024-09-30 22:39:19.928795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.099 [2024-09-30 22:39:19.928802] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.928988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.928997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.100 [2024-09-30 22:39:19.929387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.100 [2024-09-30 22:39:19.929396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:53.101 [2024-09-30 22:39:19.929598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.101 [2024-09-30 22:39:19.929608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9df60 is same with the state(6) to be set 00:10:53.101 [2024-09-30 22:39:19.929678] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf9df60 was disconnected and freed. reset controller. 
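Each NOTICE pair in the dump above is one in-flight verify I/O being completed as ABORTED - SQ DELETION while qid:1 is torn down for the controller reset; with -q 64 the dump spans the whole outstanding window, from lba:81920 up to the failing write at lba:89984. When triaging a saved copy of this console output (build.log below is a stand-in name), the dump can be sized in two greps:

    # How many commands did the SQ deletion abort?
    grep -c 'ABORTED - SQ DELETION' build.log

    # Which LBAs were still outstanding, lowest first?
    grep -o 'lba:[0-9]*' build.log | sort -t: -k2 -n | uniq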
00:10:53.101 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:53.101 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:10:53.101 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:53.101 [2024-09-30 22:39:19.930957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:10:53.101 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:53.101 task offset: 89984 on job bdev=Nvme0n1 fails
00:10:53.101 
00:10:53.101 Latency(us)
00:10:53.101 Device Information          : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average   min      max
00:10:53.101 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:53.101 Job: Nvme0n1 ended in about 0.43 seconds with error
00:10:53.101 Verification LBA range: start 0x0 length 0x400
00:10:53.101 Nvme0n1                     : 0.43        1471.88  91.99   147.19  0.00  38333.15  5870.93  34078.72
00:10:53.101 ===================================================================================================================
00:10:53.101 Total                       :             1471.88  91.99   147.19  0.00  38333.15  5870.93  34078.72
00:10:53.101 [2024-09-30 22:39:19.933214] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:53.101 [2024-09-30 22:39:19.933258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd852a0 (9): Bad file descriptor
00:10:53.101 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:53.101 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:10:53.101 [2024-09-30 22:39:20.036043] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
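The failed-run table is internally consistent with the -q 64 -o 65536 invocation: each I/O is 64 KiB, so the MiB/s column is simply IOPS/16. Checking the Nvme0n1 row by hand:

    # 1471.88 IOPS x 65536 B per I/O, expressed in MiB/s:
    echo 'scale=2; 1471.88 * 65536 / 1048576' | bc    # 91.99, as reported
    # Fail/s (147.19) counts I/Os completed in error per second over the
    # 0.43 s the job ran after the subsystem's host entry was removed.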
00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 512093 00:10:54.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (512093) - No such process 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:54.042 { 00:10:54.042 "params": { 00:10:54.042 "name": "Nvme$subsystem", 00:10:54.042 "trtype": "$TEST_TRANSPORT", 00:10:54.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.042 "adrfam": "ipv4", 00:10:54.042 "trsvcid": "$NVMF_PORT", 00:10:54.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.042 "hdgst": ${hdgst:-false}, 00:10:54.042 "ddgst": ${ddgst:-false} 00:10:54.042 }, 00:10:54.042 "method": "bdev_nvme_attach_controller" 00:10:54.042 } 00:10:54.042 EOF 00:10:54.042 )") 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:10:54.042 22:39:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:54.042 "params": { 00:10:54.042 "name": "Nvme0", 00:10:54.042 "trtype": "tcp", 00:10:54.042 "traddr": "10.0.0.2", 00:10:54.042 "adrfam": "ipv4", 00:10:54.042 "trsvcid": "4420", 00:10:54.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:54.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:54.042 "hdgst": false, 00:10:54.042 "ddgst": false 00:10:54.042 }, 00:10:54.042 "method": "bdev_nvme_attach_controller" 00:10:54.042 }' 00:10:54.042 [2024-09-30 22:39:21.003754] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:10:54.043 [2024-09-30 22:39:21.003813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512580 ] 00:10:54.303 [2024-09-30 22:39:21.080372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.303 [2024-09-30 22:39:21.144426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.563 Running I/O for 1 seconds... 
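The 'No such process' complaint followed by the traced true above shows host_management.sh line 91 tolerating a perfpid that had already exited: the kill -9 raced bdevperf's own shutdown. A sketch of that cleanup idiom as it reads out of the trace:

    # perfpid may be gone by cleanup time; don't let the failed kill
    # abort the rest of the teardown under errexit.
    kill -9 "$perfpid" 2>/dev/null || true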
00:10:55.504 1553.00 IOPS, 97.06 MiB/s
00:10:55.504 
00:10:55.504 Latency(us)
00:10:55.504 Device Information          : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average   min      max
00:10:55.504 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:55.504 Verification LBA range: start 0x0 length 0x400
00:10:55.504 Nvme0n1                     : 1.04        1607.95  100.50  0.00    0.00  39128.78  4123.31  34952.53
00:10:55.504 ===================================================================================================================
00:10:55.504 Total                       :             1607.95  100.50  0.00    0.00  39128.78  4123.31  34952.53
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:10:55.504 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:55.765 rmmod nvme_tcp
00:10:55.765 rmmod nvme_fabrics
00:10:55.765 rmmod nvme_keyring
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 511951 ']'
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 511951
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 511951 ']'
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 511951
00:10:55.765 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:10:55.766 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:55.766 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 511951
00:10:55.766 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:55.766 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:55.766 
22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 511951' 00:10:55.766 killing process with pid 511951 00:10:55.766 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 511951 00:10:55.766 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 511951 00:10:55.766 [2024-09-30 22:39:22.763782] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.026 22:39:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.937 22:39:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.937 22:39:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:57.937 00:10:57.937 real 0m14.848s 00:10:57.937 user 0m23.361s 00:10:57.937 sys 0m6.850s 00:10:57.937 22:39:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.937 22:39:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.937 ************************************ 00:10:57.937 END TEST nvmf_host_management 00:10:57.937 ************************************ 00:10:57.937 22:39:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:57.937 22:39:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.937 22:39:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.937 22:39:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.937 ************************************ 00:10:57.938 START TEST nvmf_lvol 00:10:57.938 ************************************ 00:10:57.938 22:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:58.200 * 
Looking for test storage... 00:10:58.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.200 --rc genhtml_branch_coverage=1 00:10:58.200 --rc genhtml_function_coverage=1 00:10:58.200 --rc genhtml_legend=1 00:10:58.200 --rc geninfo_all_blocks=1 00:10:58.200 --rc geninfo_unexecuted_blocks=1 00:10:58.200 00:10:58.200 ' 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.200 --rc genhtml_branch_coverage=1 00:10:58.200 --rc genhtml_function_coverage=1 00:10:58.200 --rc genhtml_legend=1 00:10:58.200 --rc geninfo_all_blocks=1 00:10:58.200 --rc geninfo_unexecuted_blocks=1 00:10:58.200 00:10:58.200 ' 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.200 --rc genhtml_branch_coverage=1 00:10:58.200 --rc genhtml_function_coverage=1 00:10:58.200 --rc genhtml_legend=1 00:10:58.200 --rc geninfo_all_blocks=1 00:10:58.200 --rc geninfo_unexecuted_blocks=1 00:10:58.200 00:10:58.200 ' 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.200 --rc genhtml_branch_coverage=1 00:10:58.200 --rc genhtml_function_coverage=1 00:10:58.200 --rc genhtml_legend=1 00:10:58.200 --rc geninfo_all_blocks=1 00:10:58.200 --rc geninfo_unexecuted_blocks=1 00:10:58.200 00:10:58.200 ' 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
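The cmp_versions trace above splits each version string on '.', '-' and ':' and compares the pieces numerically, left to right; because lcov reports 1.x on this box, `lt 1.15 2` succeeds and the branch/function coverage flags get appended to LCOV_OPTS. A minimal standalone sketch of that component-wise compare (the helper name and the zero-padding of missing fields are assumptions, not the exact scripts/common.sh code):

    # version_lt A B -> exit 0 when A < B, comparing dot/dash/colon-separated
    # numeric fields left to right (hypothetical re-implementation)
    version_lt() {
        local IFS=.-:
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}    # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                               # equal is not less-than
    }

    version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"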
00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.200 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:58.201 22:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:06.345 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:06.346 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:06.346 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:06.346 Found net devices under 0000:31:00.0: cvl_0_0 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:06.346 Found net devices under 0000:31:00.1: cvl_0_1 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.346 
22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:11:06.346 00:11:06.346 --- 10.0.0.2 ping statistics --- 00:11:06.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.346 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:11:06.346 00:11:06.346 --- 10.0.0.1 ping statistics --- 00:11:06.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.346 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.346 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=517187 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 517187 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 517187 ']' 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.347 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:06.347 [2024-09-30 22:39:32.905781] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
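With two ports on the same ice NIC, nvmf_tcp_init isolates one of them in its own network namespace so target and initiator traffic really crosses the wire: cvl_0_0 becomes 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays at 10.0.0.1 in the root namespace, the dport-4420 ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can find it later, and nvmf_tgt itself is launched under `ip netns exec`. A condensed sketch of the same setup, with the interface names and addresses taken from the trace (run as root; not the literal common.sh body):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side (root ns)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Tag the rule so cleanup can strip it via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                           # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> initiator

    # The target is then started inside the namespace, as the trace shows:
    #   ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7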
00:11:06.347 [2024-09-30 22:39:32.905846] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.347 [2024-09-30 22:39:32.996035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.347 [2024-09-30 22:39:33.093132] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.347 [2024-09-30 22:39:33.093198] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.347 [2024-09-30 22:39:33.093207] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.347 [2024-09-30 22:39:33.093215] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.347 [2024-09-30 22:39:33.093221] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.347 [2024-09-30 22:39:33.093415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.347 [2024-09-30 22:39:33.093558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.347 [2024-09-30 22:39:33.093558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.045 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.045 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:11:07.045 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:07.045 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.045 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:07.045 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.045 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:07.045 [2024-09-30 22:39:33.945705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.045 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.308 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:07.308 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.569 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:07.569 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:07.829 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:07.829 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=441413f1-36a4-41cb-adb8-7cce919daf21 00:11:08.090 22:39:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 441413f1-36a4-41cb-adb8-7cce919daf21 lvol 20 00:11:08.090 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5da084b1-20e1-4994-b0f9-082f53322568 00:11:08.090 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:08.349 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5da084b1-20e1-4994-b0f9-082f53322568 00:11:08.609 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:08.609 [2024-09-30 22:39:35.572458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.609 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:08.869 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=517828 00:11:08.869 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:08.869 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:09.808 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5da084b1-20e1-4994-b0f9-082f53322568 MY_SNAPSHOT 00:11:10.067 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=821f9158-0781-497b-a9b8-0ec341afda52 00:11:10.067 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5da084b1-20e1-4994-b0f9-082f53322568 30 00:11:10.327 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 821f9158-0781-497b-a9b8-0ec341afda52 MY_CLONE 00:11:10.587 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=84911afb-4f07-4a81-a44a-482a5b726843 00:11:10.587 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 84911afb-4f07-4a81-a44a-482a5b726843 00:11:10.846 22:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 517828 00:11:20.840 Initializing NVMe Controllers 00:11:20.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:20.840 Controller IO queue size 128, less than required. 00:11:20.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
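The test body traced above is a plain rpc.py pipeline: two 64 MiB malloc bdevs are striped into a raid0, a logical-volume store is built on the raid, a 20 MiB lvol from it is exported over NVMe/TCP, and while spdk_nvme_perf hammers the namespace the lvol is snapshotted, resized to 30 MiB, cloned, and the clone inflated. The same sequence condensed into a sketch (`rpc=scripts/rpc.py` is shorthand for the absolute path the log uses; the captured values are the ones this run actually printed):

    rpc=scripts/rpc.py                                  # shorthand, assumption

    $rpc bdev_malloc_create 64 512                      # -> Malloc0
    $rpc bdev_malloc_create 64 512                      # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # 441413f1-36a4-...
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 5da084b1-20e1-...

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # while perf writes to the namespace:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # 821f9158-0781-...
    $rpc bdev_lvol_resize "$lvol" 30                    # grow 20 -> 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)      # 84911afb-4f07-...
    $rpc bdev_lvol_inflate "$clone"                     # detach clone from snapshot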
00:11:20.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:20.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:20.840 Initialization complete. Launching workers. 00:11:20.840 ======================================================== 00:11:20.840 Latency(us) 00:11:20.840 Device Information : IOPS MiB/s Average min max 00:11:20.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17266.10 67.45 7413.86 380.01 62852.62 00:11:20.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16276.90 63.58 7864.79 3851.32 50760.21 00:11:20.840 ======================================================== 00:11:20.840 Total : 33543.00 131.03 7632.67 380.01 62852.62 00:11:20.840 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5da084b1-20e1-4994-b0f9-082f53322568 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 441413f1-36a4-41cb-adb8-7cce919daf21 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.840 rmmod nvme_tcp 00:11:20.840 rmmod nvme_fabrics 00:11:20.840 rmmod nvme_keyring 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 517187 ']' 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 517187 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 517187 ']' 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 517187 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 517187 00:11:20.840 22:39:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 517187' 00:11:20.840 killing process with pid 517187 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 517187 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 517187 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:11:20.840 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.840 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.840 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.840 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.840 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.224 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.224 00:11:22.224 real 0m24.125s 00:11:22.224 user 1m4.831s 00:11:22.224 sys 0m8.726s 00:11:22.224 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.224 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:22.224 ************************************ 00:11:22.224 END TEST nvmf_lvol 00:11:22.224 ************************************ 00:11:22.225 22:39:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:22.225 22:39:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.225 22:39:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.225 22:39:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.225 ************************************ 00:11:22.225 START TEST nvmf_lvs_grow 00:11:22.225 ************************************ 00:11:22.225 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:22.485 * Looking for test storage... 
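The nvmf_lvol teardown traced just above (before the nvmf_lvs_grow banner) undoes the setup in reverse: the nvme-tcp kernel modules are unloaded, the target process is killed and reaped, every iptables rule carrying the SPDK_NVMF comment is filtered out of a save/restore round-trip, the namespace is removed, and the initiator address is flushed. Roughly, as a sketch of that cleanup path (`ip netns delete` is an assumption for what `_remove_spdk_ns` does; the log runs the modprobe removals one module at a time):

    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring

    kill "$nvmfpid" && wait "$nvmfpid"       # killprocess / wait in the trace

    # iptr: drop only the rules tagged during setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk          # _remove_spdk_ns (assumed body)
    ip -4 addr flush cvl_0_1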
00:11:22.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:22.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.485 --rc genhtml_branch_coverage=1 00:11:22.485 --rc genhtml_function_coverage=1 00:11:22.485 --rc genhtml_legend=1 00:11:22.485 --rc geninfo_all_blocks=1 00:11:22.485 --rc geninfo_unexecuted_blocks=1 00:11:22.485 00:11:22.485 ' 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:22.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.485 --rc genhtml_branch_coverage=1 00:11:22.485 --rc genhtml_function_coverage=1 00:11:22.485 --rc genhtml_legend=1 00:11:22.485 --rc geninfo_all_blocks=1 00:11:22.485 --rc geninfo_unexecuted_blocks=1 00:11:22.485 00:11:22.485 ' 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:22.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.485 --rc genhtml_branch_coverage=1 00:11:22.485 --rc genhtml_function_coverage=1 00:11:22.485 --rc genhtml_legend=1 00:11:22.485 --rc geninfo_all_blocks=1 00:11:22.485 --rc geninfo_unexecuted_blocks=1 00:11:22.485 00:11:22.485 ' 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:22.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.485 --rc genhtml_branch_coverage=1 00:11:22.485 --rc genhtml_function_coverage=1 00:11:22.485 --rc genhtml_legend=1 00:11:22.485 --rc geninfo_all_blocks=1 00:11:22.485 --rc geninfo_unexecuted_blocks=1 00:11:22.485 00:11:22.485 ' 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.485 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:22.485 22:39:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.486 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:30.627 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:30.627 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:30.627 Found net devices under 0000:31:00.0: cvl_0_0 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:30.627 Found net devices under 0000:31:00.1: cvl_0_1 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.627 
22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.627 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.628 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.628 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.628 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.628 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.628 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.628 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:11:30.628 00:11:30.628 --- 10.0.0.2 ping statistics --- 00:11:30.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.628 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:11:30.628 00:11:30.628 --- 10.0.0.1 ping statistics --- 00:11:30.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.628 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=524404 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 524404 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 524404 ']' 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.628 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:30.628 [2024-09-30 22:39:57.126052] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
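Up to this point the trace is nvmf_tcp_init: the two ice-driven E810 ports found above (0000:31:00.0 and 0000:31:00.1, exposed as cvl_0_0 and cvl_0_1) are split into a target/initiator pair, the target port is moved into its own network namespace, both directions are verified with ping, and nvmfappstart then launches nvmf_tgt inside that namespace. A minimal standalone sketch of the same bring-up, using the interface names, addresses, and paths exactly as they appear in this log; the shell variables and the RPC polling loop standing in for waitforlisten are editorial conveniences, not part of the test script:

  # Sketch of the namespace split and target launch traced above; adjust NICs/paths per rig.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"                                        # target-side namespace
  ip link set cvl_0_0 netns "$NS"                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                        # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # target ns -> root ns

  # Start the target on core 0 inside the namespace, then poll its RPC socket
  # until it answers (roughly what waitforlisten does with /var/tmp/spdk.sock).
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done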
00:11:30.628 [2024-09-30 22:39:57.126121] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.628 [2024-09-30 22:39:57.219313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.628 [2024-09-30 22:39:57.316459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.628 [2024-09-30 22:39:57.316524] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.628 [2024-09-30 22:39:57.316533] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.628 [2024-09-30 22:39:57.316540] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.628 [2024-09-30 22:39:57.316547] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.628 [2024-09-30 22:39:57.316578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.200 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.200 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:11:31.200 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:31.200 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:31.200 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.200 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.200 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:31.200 [2024-09-30 22:39:58.163786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.200 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:31.200 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:31.200 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.200 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.461 ************************************ 00:11:31.461 START TEST lvs_grow_clean 00:11:31.461 ************************************ 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:31.461 22:39:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:31.461 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:31.462 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:31.462 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:31.723 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:31.723 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:31.723 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:31.985 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:31.985 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:31.985 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 lvol 150 00:11:32.245 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=52276997-dabb-4d4d-b0e7-d99beb339639 00:11:32.245 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:32.245 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:32.245 [2024-09-30 22:39:59.193322] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:32.245 [2024-09-30 22:39:59.193398] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:32.245 true 00:11:32.245 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:32.245 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:32.506 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:32.507 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:32.768 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52276997-dabb-4d4d-b0e7-d99beb339639 00:11:32.768 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:33.030 [2024-09-30 22:39:59.887565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.030 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=524975 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 524975 /var/tmp/bdevperf.sock 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 524975 ']' 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:33.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.291 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:33.291 [2024-09-30 22:40:00.156981] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
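With the target up and the TCP transport created (nvmf_create_transport -t tcp -o -u 8192, as traced above), lvs_grow_clean builds its device stack: a 200 MiB file-backed AIO bdev, a logical volume store with 4 MiB clusters on top of it, a 150 MiB lvol inside that, and an NVMe-oF subsystem exporting the lvol on 10.0.0.2:4420. The arithmetic behind the checks is visible in the trace: 200 MiB at 4 MiB per cluster holds 50 clusters, of which 49 are usable data clusters (the remainder holds lvstore metadata), and the 150 MiB lvol pins ceil(150/4) = 38 clusters, which is why free_clusters later reads 99 - 38 = 61 once the pool has been grown. A condensed sketch of the same RPC sequence, commands verbatim from the trace; $RPC, $AIO, and the captured $lvs/$lvol variables are editorial shorthand (bdev_lvol_create_lvstore and bdev_lvol_create print the new UUIDs on stdout):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  AIO="$SPDK/test/nvmf/target/aio_bdev"

  truncate -s 200M "$AIO"                                   # 200 MiB backing file
  "$RPC" bdev_aio_create "$AIO" aio_bdev 4096               # AIO bdev with 4 KiB blocks
  lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 4 MiB clusters -> 49 usable
  lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB volume (38 clusters)

  truncate -s 400M "$AIO"                                   # double the backing file...
  "$RPC" bdev_aio_rescan aio_bdev                           # ...and let the bdev see it
                                                            # (51200 -> 102400 4 KiB blocks)
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that the lvstore itself is not grown yet: bdev_lvol_grow_lvstore is issued during the bdevperf run below (at the 2-second mark in this trace), and the data_clusters == 99 check confirms the pool doubled under live I/O.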
00:11:33.291 [2024-09-30 22:40:00.157055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524975 ] 00:11:33.291 [2024-09-30 22:40:00.240324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.552 [2024-09-30 22:40:00.335279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.125 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.125 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:11:34.125 22:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:34.386 Nvme0n1 00:11:34.386 22:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:34.647 [ 00:11:34.647 { 00:11:34.647 "name": "Nvme0n1", 00:11:34.647 "aliases": [ 00:11:34.647 "52276997-dabb-4d4d-b0e7-d99beb339639" 00:11:34.647 ], 00:11:34.647 "product_name": "NVMe disk", 00:11:34.647 "block_size": 4096, 00:11:34.647 "num_blocks": 38912, 00:11:34.647 "uuid": "52276997-dabb-4d4d-b0e7-d99beb339639", 00:11:34.647 "numa_id": 0, 00:11:34.647 "assigned_rate_limits": { 00:11:34.647 "rw_ios_per_sec": 0, 00:11:34.647 "rw_mbytes_per_sec": 0, 00:11:34.647 "r_mbytes_per_sec": 0, 00:11:34.647 "w_mbytes_per_sec": 0 00:11:34.647 }, 00:11:34.647 "claimed": false, 00:11:34.647 "zoned": false, 00:11:34.647 "supported_io_types": { 00:11:34.647 "read": true, 00:11:34.647 "write": true, 00:11:34.647 "unmap": true, 00:11:34.647 "flush": true, 00:11:34.647 "reset": true, 00:11:34.647 "nvme_admin": true, 00:11:34.647 "nvme_io": true, 00:11:34.647 "nvme_io_md": false, 00:11:34.647 "write_zeroes": true, 00:11:34.647 "zcopy": false, 00:11:34.647 "get_zone_info": false, 00:11:34.647 "zone_management": false, 00:11:34.647 "zone_append": false, 00:11:34.647 "compare": true, 00:11:34.647 "compare_and_write": true, 00:11:34.647 "abort": true, 00:11:34.647 "seek_hole": false, 00:11:34.647 "seek_data": false, 00:11:34.647 "copy": true, 00:11:34.647 "nvme_iov_md": false 00:11:34.647 }, 00:11:34.647 "memory_domains": [ 00:11:34.647 { 00:11:34.647 "dma_device_id": "system", 00:11:34.647 "dma_device_type": 1 00:11:34.647 } 00:11:34.647 ], 00:11:34.647 "driver_specific": { 00:11:34.647 "nvme": [ 00:11:34.647 { 00:11:34.647 "trid": { 00:11:34.647 "trtype": "TCP", 00:11:34.647 "adrfam": "IPv4", 00:11:34.647 "traddr": "10.0.0.2", 00:11:34.647 "trsvcid": "4420", 00:11:34.647 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:34.647 }, 00:11:34.647 "ctrlr_data": { 00:11:34.647 "cntlid": 1, 00:11:34.647 "vendor_id": "0x8086", 00:11:34.647 "model_number": "SPDK bdev Controller", 00:11:34.647 "serial_number": "SPDK0", 00:11:34.647 "firmware_revision": "25.01", 00:11:34.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:34.647 "oacs": { 00:11:34.647 "security": 0, 00:11:34.647 "format": 0, 00:11:34.647 "firmware": 0, 00:11:34.647 "ns_manage": 0 00:11:34.647 }, 00:11:34.647 "multi_ctrlr": true, 00:11:34.647 
"ana_reporting": false 00:11:34.647 }, 00:11:34.647 "vs": { 00:11:34.647 "nvme_version": "1.3" 00:11:34.647 }, 00:11:34.647 "ns_data": { 00:11:34.647 "id": 1, 00:11:34.647 "can_share": true 00:11:34.647 } 00:11:34.647 } 00:11:34.647 ], 00:11:34.647 "mp_policy": "active_passive" 00:11:34.647 } 00:11:34.647 } 00:11:34.647 ] 00:11:34.647 22:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=525317 00:11:34.647 22:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:34.647 22:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:34.647 Running I/O for 10 seconds... 00:11:35.591 Latency(us) 00:11:35.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.591 Nvme0n1 : 1.00 25024.00 97.75 0.00 0.00 0.00 0.00 0.00 00:11:35.591 =================================================================================================================== 00:11:35.591 Total : 25024.00 97.75 0.00 0.00 0.00 0.00 0.00 00:11:35.591 00:11:36.535 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:36.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.535 Nvme0n1 : 2.00 25185.50 98.38 0.00 0.00 0.00 0.00 0.00 00:11:36.535 =================================================================================================================== 00:11:36.535 Total : 25185.50 98.38 0.00 0.00 0.00 0.00 0.00 00:11:36.535 00:11:36.795 true 00:11:36.795 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:36.795 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:37.056 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:37.056 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:37.056 22:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 525317 00:11:37.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.626 Nvme0n1 : 3.00 25264.33 98.69 0.00 0.00 0.00 0.00 0.00 00:11:37.626 =================================================================================================================== 00:11:37.626 Total : 25264.33 98.69 0.00 0.00 0.00 0.00 0.00 00:11:37.626 00:11:38.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.566 Nvme0n1 : 4.00 25312.25 98.88 0.00 0.00 0.00 0.00 0.00 00:11:38.566 =================================================================================================================== 00:11:38.566 Total : 25312.25 98.88 0.00 0.00 0.00 0.00 0.00 00:11:38.566 00:11:39.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.947 Nvme0n1 
: 5.00 25344.00 99.00 0.00 0.00 0.00 0.00 0.00 00:11:39.947 =================================================================================================================== 00:11:39.947 Total : 25344.00 99.00 0.00 0.00 0.00 0.00 0.00 00:11:39.947 00:11:40.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.889 Nvme0n1 : 6.00 25368.17 99.09 0.00 0.00 0.00 0.00 0.00 00:11:40.889 =================================================================================================================== 00:11:40.889 Total : 25368.17 99.09 0.00 0.00 0.00 0.00 0.00 00:11:40.889 00:11:41.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.829 Nvme0n1 : 7.00 25398.29 99.21 0.00 0.00 0.00 0.00 0.00 00:11:41.829 =================================================================================================================== 00:11:41.829 Total : 25398.29 99.21 0.00 0.00 0.00 0.00 0.00 00:11:41.829 00:11:42.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.769 Nvme0n1 : 8.00 25407.25 99.25 0.00 0.00 0.00 0.00 0.00 00:11:42.769 =================================================================================================================== 00:11:42.769 Total : 25407.25 99.25 0.00 0.00 0.00 0.00 0.00 00:11:42.769 00:11:43.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.714 Nvme0n1 : 9.00 25421.44 99.30 0.00 0.00 0.00 0.00 0.00 00:11:43.714 =================================================================================================================== 00:11:43.714 Total : 25421.44 99.30 0.00 0.00 0.00 0.00 0.00 00:11:43.714 00:11:44.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.654 Nvme0n1 : 10.00 25432.80 99.35 0.00 0.00 0.00 0.00 0.00 00:11:44.654 =================================================================================================================== 00:11:44.654 Total : 25432.80 99.35 0.00 0.00 0.00 0.00 0.00 00:11:44.654 00:11:44.654 00:11:44.654 Latency(us) 00:11:44.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.654 Nvme0n1 : 10.00 25431.54 99.34 0.00 0.00 5029.37 2498.56 10376.53 00:11:44.654 =================================================================================================================== 00:11:44.654 Total : 25431.54 99.34 0.00 0.00 5029.37 2498.56 10376.53 00:11:44.654 { 00:11:44.654 "results": [ 00:11:44.654 { 00:11:44.654 "job": "Nvme0n1", 00:11:44.654 "core_mask": "0x2", 00:11:44.654 "workload": "randwrite", 00:11:44.654 "status": "finished", 00:11:44.654 "queue_depth": 128, 00:11:44.654 "io_size": 4096, 00:11:44.654 "runtime": 10.00305, 00:11:44.654 "iops": 25431.543379269322, 00:11:44.654 "mibps": 99.34196632527079, 00:11:44.654 "io_failed": 0, 00:11:44.654 "io_timeout": 0, 00:11:44.654 "avg_latency_us": 5029.368166157611, 00:11:44.654 "min_latency_us": 2498.56, 00:11:44.654 "max_latency_us": 10376.533333333333 00:11:44.654 } 00:11:44.654 ], 00:11:44.654 "core_count": 1 00:11:44.654 } 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 524975 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 524975 ']' 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@954 -- # kill -0 524975 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 524975 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 524975' 00:11:44.654 killing process with pid 524975 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 524975 00:11:44.654 Received shutdown signal, test time was about 10.000000 seconds 00:11:44.654 00:11:44.654 Latency(us) 00:11:44.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.654 =================================================================================================================== 00:11:44.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:44.654 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 524975 00:11:44.915 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:45.175 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:45.175 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:45.175 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:45.436 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:45.436 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:45.436 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:45.436 [2024-09-30 22:40:12.452435] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:45.696 request: 00:11:45.696 { 00:11:45.696 "uuid": "3fccaa22-5ed2-4401-a3ff-8f6879f21862", 00:11:45.696 "method": "bdev_lvol_get_lvstores", 00:11:45.696 "req_id": 1 00:11:45.696 } 00:11:45.696 Got JSON-RPC error response 00:11:45.696 response: 00:11:45.696 { 00:11:45.696 "code": -19, 00:11:45.696 "message": "No such device" 00:11:45.696 } 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:45.696 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:45.957 aio_bdev 00:11:45.957 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 52276997-dabb-4d4d-b0e7-d99beb339639 00:11:45.957 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=52276997-dabb-4d4d-b0e7-d99beb339639 00:11:45.957 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:45.957 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:11:45.957 22:40:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:45.957 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:45.957 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:46.217 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52276997-dabb-4d4d-b0e7-d99beb339639 -t 2000 00:11:46.217 [ 00:11:46.217 { 00:11:46.217 "name": "52276997-dabb-4d4d-b0e7-d99beb339639", 00:11:46.217 "aliases": [ 00:11:46.217 "lvs/lvol" 00:11:46.217 ], 00:11:46.217 "product_name": "Logical Volume", 00:11:46.217 "block_size": 4096, 00:11:46.217 "num_blocks": 38912, 00:11:46.217 "uuid": "52276997-dabb-4d4d-b0e7-d99beb339639", 00:11:46.217 "assigned_rate_limits": { 00:11:46.217 "rw_ios_per_sec": 0, 00:11:46.217 "rw_mbytes_per_sec": 0, 00:11:46.217 "r_mbytes_per_sec": 0, 00:11:46.217 "w_mbytes_per_sec": 0 00:11:46.217 }, 00:11:46.217 "claimed": false, 00:11:46.217 "zoned": false, 00:11:46.217 "supported_io_types": { 00:11:46.217 "read": true, 00:11:46.217 "write": true, 00:11:46.217 "unmap": true, 00:11:46.218 "flush": false, 00:11:46.218 "reset": true, 00:11:46.218 "nvme_admin": false, 00:11:46.218 "nvme_io": false, 00:11:46.218 "nvme_io_md": false, 00:11:46.218 "write_zeroes": true, 00:11:46.218 "zcopy": false, 00:11:46.218 "get_zone_info": false, 00:11:46.218 "zone_management": false, 00:11:46.218 "zone_append": false, 00:11:46.218 "compare": false, 00:11:46.218 "compare_and_write": false, 00:11:46.218 "abort": false, 00:11:46.218 "seek_hole": true, 00:11:46.218 "seek_data": true, 00:11:46.218 "copy": false, 00:11:46.218 "nvme_iov_md": false 00:11:46.218 }, 00:11:46.218 "driver_specific": { 00:11:46.218 "lvol": { 00:11:46.218 "lvol_store_uuid": "3fccaa22-5ed2-4401-a3ff-8f6879f21862", 00:11:46.218 "base_bdev": "aio_bdev", 00:11:46.218 "thin_provision": false, 00:11:46.218 "num_allocated_clusters": 38, 00:11:46.218 "snapshot": false, 00:11:46.218 "clone": false, 00:11:46.218 "esnap_clone": false 00:11:46.218 } 00:11:46.218 } 00:11:46.218 } 00:11:46.218 ] 00:11:46.218 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:11:46.218 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:46.218 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:46.478 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:46.478 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:46.478 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:46.739 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:46.739 22:40:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52276997-dabb-4d4d-b0e7-d99beb339639 00:11:46.739 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fccaa22-5ed2-4401-a3ff-8f6879f21862 00:11:46.999 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:47.260 00:11:47.260 real 0m15.892s 00:11:47.260 user 0m15.579s 00:11:47.260 sys 0m1.451s 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:47.260 ************************************ 00:11:47.260 END TEST lvs_grow_clean 00:11:47.260 ************************************ 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:47.260 ************************************ 00:11:47.260 START TEST lvs_grow_dirty 00:11:47.260 ************************************ 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:47.260 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:47.521 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:47.521 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:47.780 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:11:47.780 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:11:47.780 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:47.780 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:47.780 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:47.780 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 lvol 150 00:11:48.038 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8c3166f8-5666-402c-8a8b-88c1efb74d61 00:11:48.038 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:48.038 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:48.298 [2024-09-30 22:40:15.107125] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:48.298 [2024-09-30 22:40:15.107168] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:48.298 true 00:11:48.298 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:11:48.298 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:48.298 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:48.298 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:48.556 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c3166f8-5666-402c-8a8b-88c1efb74d61 00:11:48.815 22:40:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:48.815 [2024-09-30 22:40:15.785099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.816 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=528284 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 528284 /var/tmp/bdevperf.sock 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 528284 ']' 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:49.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.075 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:49.075 [2024-09-30 22:40:16.034797] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
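The dirty variant repeats the same construction (lvstore cb01ccbb-..., lvol 8c3166f8-...) and again launches bdevperf with -z, i.e. paused until perform_tests arrives over its own RPC socket. A sketch of that initiator side, flags verbatim from the trace; the backgrounding and the $lvs variable are editorial, and in this run $lvs is cb01ccbb-9ec8-4eae-a01f-7f7a105a9325:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock

  # 4 KiB random writes, queue depth 128, 10 s, one status line per second (-S 1),
  # started suspended (-z) so bdevs can be attached before I/O begins.
  "$SPDK/build/examples/bdevperf" -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # Attach the exported namespace as local bdev Nvme0n1, then start the workload.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
          -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &

  # Grow the pool while I/O is in flight; total_data_clusters should read 99 afterwards.
  "$SPDK/scripts/rpc.py" bdev_lvol_grow_lvstore -u "$lvs"

What distinguishes lvs_grow_dirty is the teardown: instead of deleting the lvol and lvstore as the clean test did, it kill -9s the nvmf_tgt process (pid 524404 in this run) and restarts it, so the follow-on checks exercise reloading an lvstore that was never cleanly shut down. The "Killed" line further down the trace is that deliberate dirty shutdown, not a test failure.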
00:11:49.075 [2024-09-30 22:40:16.034853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528284 ] 00:11:49.334 [2024-09-30 22:40:16.111304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.334 [2024-09-30 22:40:16.164883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.902 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.902 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:49.902 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:50.162 Nvme0n1 00:11:50.162 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:50.424 [ 00:11:50.424 { 00:11:50.424 "name": "Nvme0n1", 00:11:50.424 "aliases": [ 00:11:50.424 "8c3166f8-5666-402c-8a8b-88c1efb74d61" 00:11:50.424 ], 00:11:50.424 "product_name": "NVMe disk", 00:11:50.424 "block_size": 4096, 00:11:50.424 "num_blocks": 38912, 00:11:50.424 "uuid": "8c3166f8-5666-402c-8a8b-88c1efb74d61", 00:11:50.424 "numa_id": 0, 00:11:50.424 "assigned_rate_limits": { 00:11:50.424 "rw_ios_per_sec": 0, 00:11:50.424 "rw_mbytes_per_sec": 0, 00:11:50.424 "r_mbytes_per_sec": 0, 00:11:50.424 "w_mbytes_per_sec": 0 00:11:50.424 }, 00:11:50.424 "claimed": false, 00:11:50.424 "zoned": false, 00:11:50.424 "supported_io_types": { 00:11:50.424 "read": true, 00:11:50.424 "write": true, 00:11:50.424 "unmap": true, 00:11:50.424 "flush": true, 00:11:50.424 "reset": true, 00:11:50.424 "nvme_admin": true, 00:11:50.424 "nvme_io": true, 00:11:50.424 "nvme_io_md": false, 00:11:50.424 "write_zeroes": true, 00:11:50.424 "zcopy": false, 00:11:50.424 "get_zone_info": false, 00:11:50.424 "zone_management": false, 00:11:50.424 "zone_append": false, 00:11:50.424 "compare": true, 00:11:50.424 "compare_and_write": true, 00:11:50.424 "abort": true, 00:11:50.424 "seek_hole": false, 00:11:50.424 "seek_data": false, 00:11:50.424 "copy": true, 00:11:50.424 "nvme_iov_md": false 00:11:50.424 }, 00:11:50.424 "memory_domains": [ 00:11:50.424 { 00:11:50.424 "dma_device_id": "system", 00:11:50.424 "dma_device_type": 1 00:11:50.424 } 00:11:50.424 ], 00:11:50.424 "driver_specific": { 00:11:50.424 "nvme": [ 00:11:50.424 { 00:11:50.424 "trid": { 00:11:50.424 "trtype": "TCP", 00:11:50.424 "adrfam": "IPv4", 00:11:50.424 "traddr": "10.0.0.2", 00:11:50.424 "trsvcid": "4420", 00:11:50.424 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:50.424 }, 00:11:50.424 "ctrlr_data": { 00:11:50.424 "cntlid": 1, 00:11:50.424 "vendor_id": "0x8086", 00:11:50.424 "model_number": "SPDK bdev Controller", 00:11:50.424 "serial_number": "SPDK0", 00:11:50.424 "firmware_revision": "25.01", 00:11:50.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:50.424 "oacs": { 00:11:50.424 "security": 0, 00:11:50.424 "format": 0, 00:11:50.424 "firmware": 0, 00:11:50.424 "ns_manage": 0 00:11:50.424 }, 00:11:50.424 "multi_ctrlr": true, 00:11:50.424 
"ana_reporting": false 00:11:50.424 }, 00:11:50.424 "vs": { 00:11:50.424 "nvme_version": "1.3" 00:11:50.424 }, 00:11:50.424 "ns_data": { 00:11:50.424 "id": 1, 00:11:50.424 "can_share": true 00:11:50.424 } 00:11:50.424 } 00:11:50.424 ], 00:11:50.424 "mp_policy": "active_passive" 00:11:50.424 } 00:11:50.424 } 00:11:50.424 ] 00:11:50.424 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=528440 00:11:50.424 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:50.424 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:50.424 Running I/O for 10 seconds... 00:11:51.811 Latency(us) 00:11:51.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.811 Nvme0n1 : 1.00 24920.00 97.34 0.00 0.00 0.00 0.00 0.00 00:11:51.811 =================================================================================================================== 00:11:51.811 Total : 24920.00 97.34 0.00 0.00 0.00 0.00 0.00 00:11:51.811 00:11:52.382 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:11:52.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.643 Nvme0n1 : 2.00 25131.50 98.17 0.00 0.00 0.00 0.00 0.00 00:11:52.643 =================================================================================================================== 00:11:52.643 Total : 25131.50 98.17 0.00 0.00 0.00 0.00 0.00 00:11:52.643 00:11:52.643 true 00:11:52.643 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:11:52.643 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:52.903 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:52.903 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:52.903 22:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 528440 00:11:53.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.473 Nvme0n1 : 3.00 25200.00 98.44 0.00 0.00 0.00 0.00 0.00 00:11:53.473 =================================================================================================================== 00:11:53.473 Total : 25200.00 98.44 0.00 0.00 0.00 0.00 0.00 00:11:53.473 00:11:54.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.416 Nvme0n1 : 4.00 25252.00 98.64 0.00 0.00 0.00 0.00 0.00 00:11:54.416 =================================================================================================================== 00:11:54.416 Total : 25252.00 98.64 0.00 0.00 0.00 0.00 0.00 00:11:54.416 00:11:55.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.800 Nvme0n1 
: 5.00 25296.00 98.81 0.00 0.00 0.00 0.00 0.00 00:11:55.800 =================================================================================================================== 00:11:55.800 Total : 25296.00 98.81 0.00 0.00 0.00 0.00 0.00 00:11:55.800 00:11:56.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.740 Nvme0n1 : 6.00 25335.67 98.97 0.00 0.00 0.00 0.00 0.00 00:11:56.740 =================================================================================================================== 00:11:56.740 Total : 25335.67 98.97 0.00 0.00 0.00 0.00 0.00 00:11:56.740 00:11:57.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.680 Nvme0n1 : 7.00 25357.14 99.05 0.00 0.00 0.00 0.00 0.00 00:11:57.680 =================================================================================================================== 00:11:57.680 Total : 25357.14 99.05 0.00 0.00 0.00 0.00 0.00 00:11:57.680 00:11:58.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.622 Nvme0n1 : 8.00 25384.88 99.16 0.00 0.00 0.00 0.00 0.00 00:11:58.622 =================================================================================================================== 00:11:58.622 Total : 25384.88 99.16 0.00 0.00 0.00 0.00 0.00 00:11:58.622 00:11:59.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.565 Nvme0n1 : 9.00 25394.56 99.20 0.00 0.00 0.00 0.00 0.00 00:11:59.565 =================================================================================================================== 00:11:59.565 Total : 25394.56 99.20 0.00 0.00 0.00 0.00 0.00 00:11:59.565 00:12:00.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.637 Nvme0n1 : 10.00 25414.70 99.28 0.00 0.00 0.00 0.00 0.00 00:12:00.637 =================================================================================================================== 00:12:00.637 Total : 25414.70 99.28 0.00 0.00 0.00 0.00 0.00 00:12:00.637 00:12:00.637 00:12:00.637 Latency(us) 00:12:00.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.637 Nvme0n1 : 10.00 25414.07 99.27 0.00 0.00 5033.57 3085.65 15400.96 00:12:00.637 =================================================================================================================== 00:12:00.637 Total : 25414.07 99.27 0.00 0.00 5033.57 3085.65 15400.96 00:12:00.637 { 00:12:00.637 "results": [ 00:12:00.637 { 00:12:00.637 "job": "Nvme0n1", 00:12:00.637 "core_mask": "0x2", 00:12:00.637 "workload": "randwrite", 00:12:00.637 "status": "finished", 00:12:00.637 "queue_depth": 128, 00:12:00.637 "io_size": 4096, 00:12:00.637 "runtime": 10.003473, 00:12:00.637 "iops": 25414.073692206697, 00:12:00.637 "mibps": 99.27372536018241, 00:12:00.637 "io_failed": 0, 00:12:00.637 "io_timeout": 0, 00:12:00.637 "avg_latency_us": 5033.5658378338685, 00:12:00.637 "min_latency_us": 3085.653333333333, 00:12:00.637 "max_latency_us": 15400.96 00:12:00.637 } 00:12:00.637 ], 00:12:00.637 "core_count": 1 00:12:00.637 } 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 528284 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 528284 ']' 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # kill -0 528284 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 528284 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 528284' 00:12:00.637 killing process with pid 528284 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 528284 00:12:00.637 Received shutdown signal, test time was about 10.000000 seconds 00:12:00.637 00:12:00.637 Latency(us) 00:12:00.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.637 =================================================================================================================== 00:12:00.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 528284 00:12:00.637 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.907 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:01.169 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:12:01.169 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:01.169 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:01.169 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:01.169 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 524404 00:12:01.169 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 524404 00:12:01.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 524404 Killed "${NVMF_APP[@]}" "$@" 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=530766 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 530766 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 530766 ']' 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.430 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:01.430 [2024-09-30 22:40:28.280392] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:12:01.430 [2024-09-30 22:40:28.280450] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.430 [2024-09-30 22:40:28.363283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.430 [2024-09-30 22:40:28.417733] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.430 [2024-09-30 22:40:28.417764] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.430 [2024-09-30 22:40:28.417770] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.430 [2024-09-30 22:40:28.417775] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.430 [2024-09-30 22:40:28.417779] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:01.430 [2024-09-30 22:40:28.417799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:02.372 [2024-09-30 22:40:29.251628] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:02.372 [2024-09-30 22:40:29.251700] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:02.372 [2024-09-30 22:40:29.251722] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8c3166f8-5666-402c-8a8b-88c1efb74d61 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8c3166f8-5666-402c-8a8b-88c1efb74d61 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:02.372 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:02.633 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c3166f8-5666-402c-8a8b-88c1efb74d61 -t 2000 00:12:02.633 [ 00:12:02.633 { 00:12:02.633 "name": "8c3166f8-5666-402c-8a8b-88c1efb74d61", 00:12:02.633 "aliases": [ 00:12:02.633 "lvs/lvol" 00:12:02.633 ], 00:12:02.633 "product_name": "Logical Volume", 00:12:02.633 "block_size": 4096, 00:12:02.633 "num_blocks": 38912, 00:12:02.633 "uuid": "8c3166f8-5666-402c-8a8b-88c1efb74d61", 00:12:02.633 "assigned_rate_limits": { 00:12:02.633 "rw_ios_per_sec": 0, 00:12:02.633 "rw_mbytes_per_sec": 0, 00:12:02.633 "r_mbytes_per_sec": 0, 00:12:02.633 "w_mbytes_per_sec": 0 00:12:02.633 }, 00:12:02.633 "claimed": false, 00:12:02.633 "zoned": false, 
00:12:02.633 "supported_io_types": { 00:12:02.633 "read": true, 00:12:02.633 "write": true, 00:12:02.633 "unmap": true, 00:12:02.633 "flush": false, 00:12:02.633 "reset": true, 00:12:02.633 "nvme_admin": false, 00:12:02.633 "nvme_io": false, 00:12:02.633 "nvme_io_md": false, 00:12:02.633 "write_zeroes": true, 00:12:02.633 "zcopy": false, 00:12:02.633 "get_zone_info": false, 00:12:02.633 "zone_management": false, 00:12:02.633 "zone_append": false, 00:12:02.633 "compare": false, 00:12:02.633 "compare_and_write": false, 00:12:02.633 "abort": false, 00:12:02.633 "seek_hole": true, 00:12:02.633 "seek_data": true, 00:12:02.633 "copy": false, 00:12:02.633 "nvme_iov_md": false 00:12:02.633 }, 00:12:02.633 "driver_specific": { 00:12:02.633 "lvol": { 00:12:02.633 "lvol_store_uuid": "cb01ccbb-9ec8-4eae-a01f-7f7a105a9325", 00:12:02.633 "base_bdev": "aio_bdev", 00:12:02.633 "thin_provision": false, 00:12:02.633 "num_allocated_clusters": 38, 00:12:02.633 "snapshot": false, 00:12:02.633 "clone": false, 00:12:02.633 "esnap_clone": false 00:12:02.633 } 00:12:02.633 } 00:12:02.633 } 00:12:02.633 ] 00:12:02.634 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:12:02.634 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:12:02.634 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:02.894 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:02.894 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:12:02.894 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:03.155 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:03.155 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:03.155 [2024-09-30 22:40:30.080251] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:03.155 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:12:03.416 request: 00:12:03.416 { 00:12:03.416 "uuid": "cb01ccbb-9ec8-4eae-a01f-7f7a105a9325", 00:12:03.416 "method": "bdev_lvol_get_lvstores", 00:12:03.416 "req_id": 1 00:12:03.416 } 00:12:03.416 Got JSON-RPC error response 00:12:03.416 response: 00:12:03.416 { 00:12:03.416 "code": -19, 00:12:03.416 "message": "No such device" 00:12:03.416 } 00:12:03.416 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:12:03.416 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:03.416 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:03.416 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:03.416 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:03.736 aio_bdev 00:12:03.736 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8c3166f8-5666-402c-8a8b-88c1efb74d61 00:12:03.736 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8c3166f8-5666-402c-8a8b-88c1efb74d61 00:12:03.736 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.736 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:12:03.736 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.736 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.736 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:03.736 22:40:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c3166f8-5666-402c-8a8b-88c1efb74d61 -t 2000 00:12:03.998 [ 00:12:03.998 { 00:12:03.998 "name": "8c3166f8-5666-402c-8a8b-88c1efb74d61", 00:12:03.998 "aliases": [ 00:12:03.998 "lvs/lvol" 00:12:03.998 ], 00:12:03.998 "product_name": "Logical Volume", 00:12:03.998 "block_size": 4096, 00:12:03.998 "num_blocks": 38912, 00:12:03.998 "uuid": "8c3166f8-5666-402c-8a8b-88c1efb74d61", 00:12:03.998 "assigned_rate_limits": { 00:12:03.998 "rw_ios_per_sec": 0, 00:12:03.998 "rw_mbytes_per_sec": 0, 00:12:03.998 "r_mbytes_per_sec": 0, 00:12:03.998 "w_mbytes_per_sec": 0 00:12:03.998 }, 00:12:03.998 "claimed": false, 00:12:03.998 "zoned": false, 00:12:03.998 "supported_io_types": { 00:12:03.998 "read": true, 00:12:03.998 "write": true, 00:12:03.998 "unmap": true, 00:12:03.998 "flush": false, 00:12:03.998 "reset": true, 00:12:03.998 "nvme_admin": false, 00:12:03.998 "nvme_io": false, 00:12:03.998 "nvme_io_md": false, 00:12:03.998 "write_zeroes": true, 00:12:03.998 "zcopy": false, 00:12:03.998 "get_zone_info": false, 00:12:03.998 "zone_management": false, 00:12:03.998 "zone_append": false, 00:12:03.998 "compare": false, 00:12:03.998 "compare_and_write": false, 00:12:03.998 "abort": false, 00:12:03.998 "seek_hole": true, 00:12:03.998 "seek_data": true, 00:12:03.998 "copy": false, 00:12:03.998 "nvme_iov_md": false 00:12:03.998 }, 00:12:03.998 "driver_specific": { 00:12:03.998 "lvol": { 00:12:03.998 "lvol_store_uuid": "cb01ccbb-9ec8-4eae-a01f-7f7a105a9325", 00:12:03.998 "base_bdev": "aio_bdev", 00:12:03.998 "thin_provision": false, 00:12:03.998 "num_allocated_clusters": 38, 00:12:03.998 "snapshot": false, 00:12:03.998 "clone": false, 00:12:03.998 "esnap_clone": false 00:12:03.998 } 00:12:03.998 } 00:12:03.998 } 00:12:03.998 ] 00:12:03.998 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:12:03.998 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:12:03.998 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:03.998 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:03.998 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 00:12:03.998 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:04.260 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:04.260 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8c3166f8-5666-402c-8a8b-88c1efb74d61 00:12:04.522 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cb01ccbb-9ec8-4eae-a01f-7f7a105a9325 
00:12:04.522 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:04.783 00:12:04.783 real 0m17.502s 00:12:04.783 user 0m45.769s 00:12:04.783 sys 0m3.079s 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:04.783 ************************************ 00:12:04.783 END TEST lvs_grow_dirty 00:12:04.783 ************************************ 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:12:04.783 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:04.783 nvmf_trace.0 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.044 rmmod nvme_tcp 00:12:05.044 rmmod nvme_fabrics 00:12:05.044 rmmod nvme_keyring 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 530766 ']' 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 530766 00:12:05.044 
22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 530766 ']' 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 530766 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 530766 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 530766' 00:12:05.044 killing process with pid 530766 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 530766 00:12:05.044 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 530766 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.306 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.307 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.307 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.307 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.221 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.221 00:12:07.221 real 0m44.998s 00:12:07.221 user 1m7.715s 00:12:07.221 sys 0m10.832s 00:12:07.221 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.221 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:07.221 ************************************ 00:12:07.221 END TEST nvmf_lvs_grow 00:12:07.221 ************************************ 00:12:07.221 22:40:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:07.221 22:40:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.221 22:40:34 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.221 22:40:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:07.483 ************************************ 00:12:07.483 START TEST nvmf_bdev_io_wait 00:12:07.483 ************************************ 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:07.483 * Looking for test storage... 00:12:07.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:07.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.483 --rc genhtml_branch_coverage=1 00:12:07.483 --rc genhtml_function_coverage=1 00:12:07.483 --rc genhtml_legend=1 00:12:07.483 --rc geninfo_all_blocks=1 00:12:07.483 --rc geninfo_unexecuted_blocks=1 00:12:07.483 00:12:07.483 ' 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:07.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.483 --rc genhtml_branch_coverage=1 00:12:07.483 --rc genhtml_function_coverage=1 00:12:07.483 --rc genhtml_legend=1 00:12:07.483 --rc geninfo_all_blocks=1 00:12:07.483 --rc geninfo_unexecuted_blocks=1 00:12:07.483 00:12:07.483 ' 00:12:07.483 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:07.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.483 --rc genhtml_branch_coverage=1 00:12:07.483 --rc genhtml_function_coverage=1 00:12:07.484 --rc genhtml_legend=1 00:12:07.484 --rc geninfo_all_blocks=1 00:12:07.484 --rc geninfo_unexecuted_blocks=1 00:12:07.484 00:12:07.484 ' 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:07.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.484 --rc genhtml_branch_coverage=1 00:12:07.484 --rc genhtml_function_coverage=1 00:12:07.484 --rc genhtml_legend=1 00:12:07.484 --rc geninfo_all_blocks=1 00:12:07.484 --rc geninfo_unexecuted_blocks=1 00:12:07.484 00:12:07.484 ' 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.484 22:40:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.484 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.628 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:15.629 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:15.629 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:15.629 Found net devices under 0000:31:00.0: cvl_0_0 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:15.629 Found net devices under 0000:31:00.1: cvl_0_1 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.629 22:40:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.629 22:40:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.629 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.629 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.629 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.629 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.629 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:12:15.630 00:12:15.630 --- 10.0.0.2 ping statistics --- 00:12:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.630 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:12:15.630 00:12:15.630 --- 10.0.0.1 ping statistics --- 00:12:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.630 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=535901 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 535901 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 535901 ']' 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.630 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.630 [2024-09-30 22:40:42.365750] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
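nvmfappstart then launches nvmf_tgt inside that namespace with --wait-for-rpc and blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A hedged sketch of what that wait amounts to (the real helper lives in autotest_common.sh; the retry budget and the probe RPC used here are assumptions):

waitforlisten() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1               # target died during startup
        # rpc_get_methods succeeds once the UNIX socket is up and serving
        scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1                                                  # never came up
}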
00:12:15.630 [2024-09-30 22:40:42.365814] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.630 [2024-09-30 22:40:42.457105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.630 [2024-09-30 22:40:42.554787] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.630 [2024-09-30 22:40:42.554855] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.630 [2024-09-30 22:40:42.554865] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.630 [2024-09-30 22:40:42.554872] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.630 [2024-09-30 22:40:42.554879] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.630 [2024-09-30 22:40:42.555057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.630 [2024-09-30 22:40:42.555202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.630 [2024-09-30 22:40:42.555347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.630 [2024-09-30 22:40:42.555347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.216 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.216 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:12:16.216 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:16.216 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.216 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
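Once the reactors are up, the test drives the rest of the bring-up over that RPC socket; the rpc_cmd traces around this point correspond to plain rpc.py calls like the ones below (rpc.py path assumed). The -p 5 -c 1 pair shrinks the bdev_io pool and per-thread cache, which is what makes the bdev IO-wait retry path fire once four bdevperf jobs pile on:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_set_options -p 5 -c 1             # tiny bdev_io pool: provoke ENOMEM + io_wait
$RPC framework_start_init                   # release the --wait-for-rpc pause
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420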
# set +x 00:12:16.478 [2024-09-30 22:40:43.318739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:16.478 Malloc0 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:16.478 [2024-09-30 22:40:43.401265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=536103 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=536106 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:16.478 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:16.479 { 00:12:16.479 "params": { 
00:12:16.479 "name": "Nvme$subsystem", 00:12:16.479 "trtype": "$TEST_TRANSPORT", 00:12:16.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.479 "adrfam": "ipv4", 00:12:16.479 "trsvcid": "$NVMF_PORT", 00:12:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.479 "hdgst": ${hdgst:-false}, 00:12:16.479 "ddgst": ${ddgst:-false} 00:12:16.479 }, 00:12:16.479 "method": "bdev_nvme_attach_controller" 00:12:16.479 } 00:12:16.479 EOF 00:12:16.479 )") 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=536109 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:16.479 { 00:12:16.479 "params": { 00:12:16.479 "name": "Nvme$subsystem", 00:12:16.479 "trtype": "$TEST_TRANSPORT", 00:12:16.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.479 "adrfam": "ipv4", 00:12:16.479 "trsvcid": "$NVMF_PORT", 00:12:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.479 "hdgst": ${hdgst:-false}, 00:12:16.479 "ddgst": ${ddgst:-false} 00:12:16.479 }, 00:12:16.479 "method": "bdev_nvme_attach_controller" 00:12:16.479 } 00:12:16.479 EOF 00:12:16.479 )") 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=536113 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:16.479 { 00:12:16.479 "params": { 00:12:16.479 "name": "Nvme$subsystem", 00:12:16.479 "trtype": "$TEST_TRANSPORT", 00:12:16.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.479 "adrfam": "ipv4", 00:12:16.479 "trsvcid": "$NVMF_PORT", 00:12:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.479 "hdgst": ${hdgst:-false}, 
00:12:16.479 "ddgst": ${ddgst:-false} 00:12:16.479 }, 00:12:16.479 "method": "bdev_nvme_attach_controller" 00:12:16.479 } 00:12:16.479 EOF 00:12:16.479 )") 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:16.479 { 00:12:16.479 "params": { 00:12:16.479 "name": "Nvme$subsystem", 00:12:16.479 "trtype": "$TEST_TRANSPORT", 00:12:16.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.479 "adrfam": "ipv4", 00:12:16.479 "trsvcid": "$NVMF_PORT", 00:12:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.479 "hdgst": ${hdgst:-false}, 00:12:16.479 "ddgst": ${ddgst:-false} 00:12:16.479 }, 00:12:16.479 "method": "bdev_nvme_attach_controller" 00:12:16.479 } 00:12:16.479 EOF 00:12:16.479 )") 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 536103 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:16.479 "params": { 00:12:16.479 "name": "Nvme1", 00:12:16.479 "trtype": "tcp", 00:12:16.479 "traddr": "10.0.0.2", 00:12:16.479 "adrfam": "ipv4", 00:12:16.479 "trsvcid": "4420", 00:12:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.479 "hdgst": false, 00:12:16.479 "ddgst": false 00:12:16.479 }, 00:12:16.479 "method": "bdev_nvme_attach_controller" 00:12:16.479 }' 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:16.479 "params": { 00:12:16.479 "name": "Nvme1", 00:12:16.479 "trtype": "tcp", 00:12:16.479 "traddr": "10.0.0.2", 00:12:16.479 "adrfam": "ipv4", 00:12:16.479 "trsvcid": "4420", 00:12:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.479 "hdgst": false, 00:12:16.479 "ddgst": false 00:12:16.479 }, 00:12:16.479 "method": "bdev_nvme_attach_controller" 00:12:16.479 }' 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:16.479 "params": { 00:12:16.479 "name": "Nvme1", 00:12:16.479 "trtype": "tcp", 00:12:16.479 "traddr": "10.0.0.2", 00:12:16.479 "adrfam": "ipv4", 00:12:16.479 "trsvcid": "4420", 00:12:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.479 "hdgst": false, 00:12:16.479 "ddgst": false 00:12:16.479 }, 00:12:16.479 "method": "bdev_nvme_attach_controller" 00:12:16.479 }' 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:12:16.479 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:16.479 "params": { 00:12:16.479 "name": "Nvme1", 00:12:16.479 "trtype": "tcp", 00:12:16.479 "traddr": "10.0.0.2", 00:12:16.479 "adrfam": "ipv4", 00:12:16.479 "trsvcid": "4420", 00:12:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.479 "hdgst": false, 00:12:16.479 "ddgst": false 00:12:16.479 }, 00:12:16.479 "method": "bdev_nvme_attach_controller" 00:12:16.479 }' 00:12:16.479 [2024-09-30 22:40:43.461556] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:12:16.480 [2024-09-30 22:40:43.461559] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:12:16.480 [2024-09-30 22:40:43.461630] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:16.480 [2024-09-30 22:40:43.461631] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:16.480 [2024-09-30 22:40:43.464913] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:12:16.480 [2024-09-30 22:40:43.464925] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
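Four bdevperf instances then run the write/read/flush/unmap workloads against cnode1 at the same time, each on its own core (-m) and DPDK file prefix (-i), which is why four separate "Starting SPDK" banners appear above. Condensed, with the option strings as traced:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
opts=(-q 128 -o 4096 -t 1 -s 256)                    # qd 128, 4 KiB IOs, 1 s runtime, 256 MiB mem
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${opts[@]}" -w write & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${opts[@]}" -w read  & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${opts[@]}" -w flush & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${opts[@]}" -w unmap & UNMAP_PID=$!
wait $WRITE_PID; wait $READ_PID; wait $FLUSH_PID; wait $UNMAP_PID   # the wait 5361xx lines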
00:12:16.480 [2024-09-30 22:40:43.464981] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:16.480 [2024-09-30 22:40:43.464993] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:16.741 [2024-09-30 22:40:43.687914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.741 [2024-09-30 22:40:43.757593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:12:17.002 [2024-09-30 22:40:43.780514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.002 [2024-09-30 22:40:43.851240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.002 [2024-09-30 22:40:43.856831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:12:17.002 [2024-09-30 22:40:43.915425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.002 [2024-09-30 22:40:43.917692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:12:17.002 [2024-09-30 22:40:43.981483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:12:17.263 Running I/O for 1 seconds... 00:12:17.263 Running I/O for 1 seconds... 00:12:17.524 Running I/O for 1 seconds... 00:12:17.524 Running I/O for 1 seconds... 00:12:18.095 10007.00 IOPS, 39.09 MiB/s 11416.00 IOPS, 44.59 MiB/s 00:12:18.095 Latency(us) 00:12:18.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.095 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:18.095 Nvme1n1 : 1.01 11458.46 44.76 0.00 0.00 11125.00 5980.16 20425.39 00:12:18.095 =================================================================================================================== 00:12:18.095 Total : 11458.46 44.76 0.00 0.00 11125.00 5980.16 20425.39 00:12:18.095 00:12:18.095 Latency(us) 00:12:18.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.095 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:18.095 Nvme1n1 : 1.01 10062.02 39.30 0.00 0.00 12668.79 6826.67 23483.73 00:12:18.095 =================================================================================================================== 00:12:18.095 Total : 10062.02 39.30 0.00 0.00 12668.79 6826.67 23483.73 00:12:18.356 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 536106 00:12:18.356 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 536109 00:12:18.617 188464.00 IOPS, 736.19 MiB/s 00:12:18.617 Latency(us) 00:12:18.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.617 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:18.617 Nvme1n1 : 1.00 188087.92 734.72 0.00 0.00 676.52 310.61 1979.73 00:12:18.617 =================================================================================================================== 00:12:18.617 Total : 188087.92 734.72 0.00 0.00 676.52 310.61 1979.73 00:12:18.617 12090.00 IOPS, 47.23 MiB/s 00:12:18.617 Latency(us) 00:12:18.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.617 Job: Nvme1n1 (Core Mask 0x80, workload: 
unmap, depth: 128, IO size: 4096) 00:12:18.617 Nvme1n1 : 1.01 12165.82 47.52 0.00 0.00 10488.36 4505.60 22282.24 00:12:18.617 =================================================================================================================== 00:12:18.617 Total : 12165.82 47.52 0.00 0.00 10488.36 4505.60 22282.24 00:12:18.617 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 536113 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.878 rmmod nvme_tcp 00:12:18.878 rmmod nvme_fabrics 00:12:18.878 rmmod nvme_keyring 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 535901 ']' 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 535901 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 535901 ']' 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 535901 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 535901 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 535901' 00:12:18.878 killing process with pid 535901 00:12:18.878 22:40:45 
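Teardown begins with killprocess on the target pid; the trace above shows its guard rails: confirm the pid still exists and still names one of our processes (reactor_0 here, not sudo) before signalling it. A sketch following those traced steps (the sudo branch body is assumed):

killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2> /dev/null || return 0               # already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    fi
    if [ "$process_name" = sudo ]; then
        # we only see the sudo wrapper: signal its child instead (assumed behaviour)
        pid=$(ps --no-headers -o pid= --ppid "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true                      # reap; SIGTERM exit code is expected
}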
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 535901 00:12:18.878 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 535901 00:12:19.139 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:19.139 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:19.139 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:19.139 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:19.140 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:12:19.140 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:19.140 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:12:19.140 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:19.140 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:19.140 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.140 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.140 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:21.689 00:12:21.689 real 0m13.897s 00:12:21.689 user 0m21.616s 00:12:21.689 sys 0m8.075s 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.689 ************************************ 00:12:21.689 END TEST nvmf_bdev_io_wait 00:12:21.689 ************************************ 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:21.689 ************************************ 00:12:21.689 START TEST nvmf_queue_depth 00:12:21.689 ************************************ 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:21.689 * Looking for test storage... 
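nvmftestfini then unloads the initiator modules and reverses the network plumbing; the SPDK_NVMF comment planted at setup is what lets a single grep strip exactly our firewall rules:

modprobe -v -r nvme-tcp                                 # pulls out nvme_tcp/nvme_fabrics/nvme_keyring, as logged
iptables-save | grep -v SPDK_NVMF | iptables-restore    # round-trip the ruleset minus our tagged rules
ip netns delete cvl_0_0_ns_spdk                         # remove_spdk_ns equivalent (helper body not traced)
ip -4 addr flush cvl_0_1                                # leave the initiator port unconfigured again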
00:12:21.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.689 --rc genhtml_branch_coverage=1 00:12:21.689 --rc genhtml_function_coverage=1 00:12:21.689 --rc genhtml_legend=1 00:12:21.689 --rc geninfo_all_blocks=1 00:12:21.689 --rc geninfo_unexecuted_blocks=1 00:12:21.689 00:12:21.689 ' 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.689 --rc genhtml_branch_coverage=1 00:12:21.689 --rc genhtml_function_coverage=1 00:12:21.689 --rc genhtml_legend=1 00:12:21.689 --rc geninfo_all_blocks=1 00:12:21.689 --rc geninfo_unexecuted_blocks=1 00:12:21.689 00:12:21.689 ' 00:12:21.689 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.689 --rc genhtml_branch_coverage=1 00:12:21.689 --rc genhtml_function_coverage=1 00:12:21.689 --rc genhtml_legend=1 00:12:21.689 --rc geninfo_all_blocks=1 00:12:21.689 --rc geninfo_unexecuted_blocks=1 00:12:21.689 00:12:21.689 ' 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:21.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.690 --rc genhtml_branch_coverage=1 00:12:21.690 --rc genhtml_function_coverage=1 00:12:21.690 --rc genhtml_legend=1 00:12:21.690 --rc geninfo_all_blocks=1 00:12:21.690 --rc geninfo_unexecuted_blocks=1 00:12:21.690 00:12:21.690 ' 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
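The scripts/common.sh trace above is the field-wise version compare behind "lt 1.15 2": both version strings are split on .-: and walked element by element, so the test knows whether the installed lcov predates 2.x before picking coverage flags. A compact sketch of the same logic (the per-field decimal() sanity checks of the real helper are elided):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
    local IFS='.-:' op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v lt=0 gt=0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && gt=1 && break
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && lt=1 && break
    done
    case $op in
        '<') ((lt == 1)) ;;
        '>') ((gt == 1)) ;;
        '<=') ((gt == 0)) ;;
        '>=') ((lt == 0)) ;;
    esac
}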
-- nvmf/common.sh@7 -- # uname -s 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:21.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
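A few lines up, common.sh mints the initiator identity with nvme gen-hostnqn and derives the host ID from it; stripping the NQN down to its UUID suffix, as sketched here, is an assumption about that derivation:

NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-...
NVME_HOSTID=${NVME_HOSTNQN##*:}        # keep the trailing UUID as the host identifier
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # as in the traced NVME_HOST array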
MALLOC_BLOCK_SIZE=512 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:21.690 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.837 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:29.838 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:29.838 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:29.838 Found net devices under 0000:31:00.0: cvl_0_0 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:29.838 Found net devices under 0000:31:00.1: cvl_0_1 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.838 22:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:12:29.838 00:12:29.838 --- 10.0.0.2 ping statistics --- 00:12:29.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.838 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:12:29.838 00:12:29.838 --- 10.0.0.1 ping statistics --- 00:12:29.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.838 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:29.838 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=541030 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 541030 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 541030 ']' 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.839 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.839 [2024-09-30 22:40:56.307589] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:12:29.839 [2024-09-30 22:40:56.307657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.839 [2024-09-30 22:40:56.399883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.839 [2024-09-30 22:40:56.492015] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.839 [2024-09-30 22:40:56.492075] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.839 [2024-09-30 22:40:56.492084] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.839 [2024-09-30 22:40:56.492091] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.839 [2024-09-30 22:40:56.492097] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.839 [2024-09-30 22:40:56.492121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.101 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.101 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:30.101 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:30.101 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 [2024-09-30 22:40:57.168742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 Malloc0 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.363 22:40:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 [2024-09-30 22:40:57.250304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=541184 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 541184 /var/tmp/bdevperf.sock 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 541184 ']' 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:30.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:30.363 22:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 [2024-09-30 22:40:57.307646] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
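With nvmf_tgt up and listening on /var/tmp/spdk.sock, the whole target configuration above is five JSON-RPC calls: a TCP transport, a 64 MiB malloc bdev, a subsystem that allows any host, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. As a plain rpc.py session (a sketch of what rpc_cmd traces above, with the paths and names used in this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_create_transport -t tcp -o -u 8192   # -u: in-capsule data size; -o passed through by the harness
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

bdevperf then runs as a second SPDK process on the initiator side with its own RPC socket (-r /var/tmp/bdevperf.sock) so the two apps do not collide; -q 1024 -o 4096 -w verify -t 10 is the workload under test: queue depth 1024, 4 KiB verified I/O, for 10 seconds.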
00:12:30.363 [2024-09-30 22:40:57.307714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541184 ] 00:12:30.625 [2024-09-30 22:40:57.392192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.625 [2024-09-30 22:40:57.489135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.198 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.198 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:31.198 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:31.198 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.198 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:31.460 NVMe0n1 00:12:31.460 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.460 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:31.460 Running I/O for 10 seconds... 00:12:41.686 8393.00 IOPS, 32.79 MiB/s 9548.50 IOPS, 37.30 MiB/s 10244.33 IOPS, 40.02 MiB/s 10946.75 IOPS, 42.76 MiB/s 11469.20 IOPS, 44.80 MiB/s 11846.83 IOPS, 46.28 MiB/s 12134.57 IOPS, 47.40 MiB/s 12284.50 IOPS, 47.99 MiB/s 12438.44 IOPS, 48.59 MiB/s 12589.90 IOPS, 49.18 MiB/s 00:12:41.686 Latency(us) 00:12:41.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.686 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:41.686 Verification LBA range: start 0x0 length 0x4000 00:12:41.686 NVMe0n1 : 10.06 12616.07 49.28 0.00 0.00 80911.38 24794.45 74274.13 00:12:41.686 =================================================================================================================== 00:12:41.686 Total : 12616.07 49.28 0.00 0.00 80911.38 24794.45 74274.13 00:12:41.686 { 00:12:41.686 "results": [ 00:12:41.686 { 00:12:41.686 "job": "NVMe0n1", 00:12:41.686 "core_mask": "0x1", 00:12:41.686 "workload": "verify", 00:12:41.686 "status": "finished", 00:12:41.686 "verify_range": { 00:12:41.686 "start": 0, 00:12:41.686 "length": 16384 00:12:41.686 }, 00:12:41.686 "queue_depth": 1024, 00:12:41.686 "io_size": 4096, 00:12:41.686 "runtime": 10.059075, 00:12:41.686 "iops": 12616.070563148201, 00:12:41.686 "mibps": 49.28152563729766, 00:12:41.686 "io_failed": 0, 00:12:41.686 "io_timeout": 0, 00:12:41.686 "avg_latency_us": 80911.37987654905, 00:12:41.686 "min_latency_us": 24794.453333333335, 00:12:41.686 "max_latency_us": 74274.13333333333 00:12:41.686 } 00:12:41.686 ], 00:12:41.686 "core_count": 1 00:12:41.686 } 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 541184 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 541184 ']' 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 541184 00:12:41.686 
22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 541184 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 541184' 00:12:41.686 killing process with pid 541184 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 541184 00:12:41.686 Received shutdown signal, test time was about 10.000000 seconds 00:12:41.686 00:12:41.686 Latency(us) 00:12:41.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.686 =================================================================================================================== 00:12:41.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 541184 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.686 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.686 rmmod nvme_tcp 00:12:41.686 rmmod nvme_fabrics 00:12:41.686 rmmod nvme_keyring 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 541030 ']' 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 541030 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 541030 ']' 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 541030 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 541030 00:12:41.946 22:41:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 541030' 00:12:41.946 killing process with pid 541030 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 541030 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 541030 00:12:41.946 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.947 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.489 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.489 00:12:44.489 real 0m22.782s 00:12:44.489 user 0m25.843s 00:12:44.489 sys 0m7.236s 00:12:44.489 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.489 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.489 ************************************ 00:12:44.489 END TEST nvmf_queue_depth 00:12:44.489 ************************************ 00:12:44.489 22:41:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:44.490 ************************************ 00:12:44.490 START TEST nvmf_target_multipath 00:12:44.490 ************************************ 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:44.490 * Looking for test storage... 
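That closes the queue-depth test: the verify workload ramps from ~8.4k to ~12.6k IOPS at QD 1024 (about 49 MiB/s of 4 KiB I/O against the RAM-backed namespace), and nvmftestfini tears the rig back down as the mirror image of setup. A standalone sketch of that cleanup (the _remove_spdk_ns body is suppressed in the trace; deleting the namespace is its visible effect here):

    modprobe -r nvme-tcp nvme-fabrics nvme-keyring   # the rmmod lines above, in one call
    # drop only the firewall rules tagged SPDK_NVMF, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                  # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                         # final flush, as traced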
00:12:44.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:44.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.490 --rc genhtml_branch_coverage=1 00:12:44.490 --rc genhtml_function_coverage=1 00:12:44.490 --rc genhtml_legend=1 00:12:44.490 --rc geninfo_all_blocks=1 00:12:44.490 --rc geninfo_unexecuted_blocks=1 00:12:44.490 00:12:44.490 ' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:44.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.490 --rc genhtml_branch_coverage=1 00:12:44.490 --rc genhtml_function_coverage=1 00:12:44.490 --rc genhtml_legend=1 00:12:44.490 --rc geninfo_all_blocks=1 00:12:44.490 --rc geninfo_unexecuted_blocks=1 00:12:44.490 00:12:44.490 ' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:44.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.490 --rc genhtml_branch_coverage=1 00:12:44.490 --rc genhtml_function_coverage=1 00:12:44.490 --rc genhtml_legend=1 00:12:44.490 --rc geninfo_all_blocks=1 00:12:44.490 --rc geninfo_unexecuted_blocks=1 00:12:44.490 00:12:44.490 ' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:44.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.490 --rc genhtml_branch_coverage=1 00:12:44.490 --rc genhtml_function_coverage=1 00:12:44.490 --rc genhtml_legend=1 00:12:44.490 --rc geninfo_all_blocks=1 00:12:44.490 --rc geninfo_unexecuted_blocks=1 00:12:44.490 00:12:44.490 ' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.490 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.491 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:52.629 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # 
[[ ice == unbound ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:52.629 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:52.629 Found net devices under 0000:31:00.0: cvl_0_0 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:52.629 22:41:18 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:52.629 Found net devices under 0000:31:00.1: cvl_0_1 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.629 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:12:52.630 00:12:52.630 --- 10.0.0.2 ping statistics --- 00:12:52.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.630 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:12:52.630 00:12:52.630 --- 10.0.0.1 ping statistics --- 00:12:52.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.630 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:52.630 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:52.630 only one NIC for nvmf test 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
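The multipath test repeats the identical discovery and namespace setup, then stops before issuing any multipath I/O: it needs a second target address, and nvmf_tcp_init left NVMF_SECOND_TARGET_IP empty on this rig (common.sh@262 above), so the empty test at multipath.sh@45 takes the early-exit branch. The guard being traced amounts to (variable name inferred from the empty '[ -z ]' expansion; a sketch, not the script verbatim):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # '[' -z ']' in the trace: the var is empty
        echo 'only one NIC for nvmf test'
        nvmftestfini                            # same teardown as the previous test
        exit 0
    fi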
00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.630 rmmod nvme_tcp 00:12:52.630 rmmod nvme_fabrics 00:12:52.630 rmmod nvme_keyring 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.630 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n '' ']' 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.546 00:12:54.546 real 0m10.149s 00:12:54.546 user 0m2.216s 00:12:54.546 sys 0m5.871s 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:54.546 ************************************ 00:12:54.546 END TEST nvmf_target_multipath 00:12:54.546 ************************************ 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.546 22:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.546 ************************************ 00:12:54.546 START TEST nvmf_zcopy 00:12:54.546 ************************************ 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:54.547 * Looking for test storage... 
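The zcopy test starting here walks the same NIC discovery as the two tests before it: gather_supported_nvmf_pci_devs matches PCI vendor:device pairs against the e810/x722/mlx tables built at common.sh@320-342, then resolves each match to its kernel interface through sysfs. Stripped to its core (a sketch; the address and IDs are the ones this host reported above):

    pci=0000:31:00.0
    cat /sys/bus/pci/devices/$pci/vendor   # 0x8086, Intel
    cat /sys/bus/pci/devices/$pci/device   # 0x159b, an e810 table entry
    ls /sys/bus/pci/devices/$pci/net/      # cvl_0_0, the interface the tests use

Both "Found net devices under 0000:31:00.x" messages in this log are exactly this lookup.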
00:12:54.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:54.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.547 --rc genhtml_branch_coverage=1 00:12:54.547 --rc genhtml_function_coverage=1 00:12:54.547 --rc genhtml_legend=1 00:12:54.547 --rc geninfo_all_blocks=1 00:12:54.547 --rc geninfo_unexecuted_blocks=1 00:12:54.547 00:12:54.547 ' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:54.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.547 --rc genhtml_branch_coverage=1 00:12:54.547 --rc genhtml_function_coverage=1 00:12:54.547 --rc genhtml_legend=1 00:12:54.547 --rc geninfo_all_blocks=1 00:12:54.547 --rc geninfo_unexecuted_blocks=1 00:12:54.547 00:12:54.547 ' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:54.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.547 --rc genhtml_branch_coverage=1 00:12:54.547 --rc genhtml_function_coverage=1 00:12:54.547 --rc genhtml_legend=1 00:12:54.547 --rc geninfo_all_blocks=1 00:12:54.547 --rc geninfo_unexecuted_blocks=1 00:12:54.547 00:12:54.547 ' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:54.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.547 --rc genhtml_branch_coverage=1 00:12:54.547 --rc genhtml_function_coverage=1 00:12:54.547 --rc genhtml_legend=1 00:12:54.547 --rc geninfo_all_blocks=1 00:12:54.547 --rc geninfo_unexecuted_blocks=1 00:12:54.547 00:12:54.547 ' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.547 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.808 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:02.958 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:02.958 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:02.958 
22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:02.958 Found net devices under 0000:31:00.0: cvl_0_0 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.958 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:02.958 Found net devices under 0000:31:00.1: cvl_0_1 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.959 22:41:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.959 22:41:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:13:02.959 00:13:02.959 --- 10.0.0.2 ping statistics --- 00:13:02.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.959 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:13:02.959 00:13:02.959 --- 10.0.0.1 ping statistics --- 00:13:02.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.959 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=552203 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 552203 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 552203 ']' 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.959 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:02.959 [2024-09-30 22:41:29.378842] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:13:02.959 [2024-09-30 22:41:29.378910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.959 [2024-09-30 22:41:29.467211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.959 [2024-09-30 22:41:29.559596] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.959 [2024-09-30 22:41:29.559655] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.959 [2024-09-30 22:41:29.559663] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.959 [2024-09-30 22:41:29.559670] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.959 [2024-09-30 22:41:29.559676] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.959 [2024-09-30 22:41:29.559702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.220 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.220 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:13:03.220 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:03.220 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.220 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 [2024-09-30 22:41:30.248420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 [2024-09-30 22:41:30.272716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 malloc0 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:13:03.488 { 00:13:03.488 "params": { 00:13:03.488 "name": "Nvme$subsystem", 00:13:03.488 "trtype": "$TEST_TRANSPORT", 00:13:03.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:03.488 "adrfam": "ipv4", 00:13:03.488 "trsvcid": "$NVMF_PORT", 00:13:03.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:03.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:03.488 "hdgst": ${hdgst:-false}, 00:13:03.488 "ddgst": ${ddgst:-false} 00:13:03.488 }, 00:13:03.488 "method": "bdev_nvme_attach_controller" 00:13:03.488 } 00:13:03.488 EOF 00:13:03.488 )") 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
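The records above show the complete target-side bring-up for this test: a zero-copy TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. As a minimal sketch, the same configuration could be replayed by hand with scripts/rpc.py (rpc_cmd in the harness is a wrapper around it); the RPC shorthand below and the default /var/tmp/spdk.sock RPC socket are assumptions for illustration, but every subcommand and flag is copied from the records above.

  # Hypothetical replay of the RPC sequence recorded above (a sketch, not the harness itself)
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy          # zero-copy TCP transport, as at zcopy.sh@22
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB bdev with 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # becomes NSID 1 of cnode1

The bdev_nvme_attach_controller JSON generated next is the initiator-side half: bdevperf reads it from /dev/fd/62 and connects to the same NQN at 10.0.0.2:4420.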
00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:13:03.488 22:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:13:03.488 "params": { 00:13:03.488 "name": "Nvme1", 00:13:03.488 "trtype": "tcp", 00:13:03.488 "traddr": "10.0.0.2", 00:13:03.488 "adrfam": "ipv4", 00:13:03.488 "trsvcid": "4420", 00:13:03.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.488 "hdgst": false, 00:13:03.488 "ddgst": false 00:13:03.488 }, 00:13:03.488 "method": "bdev_nvme_attach_controller" 00:13:03.488 }' 00:13:03.488 [2024-09-30 22:41:30.389995] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:13:03.488 [2024-09-30 22:41:30.390060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552327 ] 00:13:03.488 [2024-09-30 22:41:30.472032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.845 [2024-09-30 22:41:30.569040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.147 Running I/O for 10 seconds... 00:13:14.012 6806.00 IOPS, 53.17 MiB/s 8246.00 IOPS, 64.42 MiB/s 8752.33 IOPS, 68.38 MiB/s 9008.75 IOPS, 70.38 MiB/s 9162.60 IOPS, 71.58 MiB/s 9266.33 IOPS, 72.39 MiB/s 9338.57 IOPS, 72.96 MiB/s 9391.75 IOPS, 73.37 MiB/s 9433.78 IOPS, 73.70 MiB/s 9464.30 IOPS, 73.94 MiB/s
00:13:14.012
00:13:14.012 Latency(us)
00:13:14.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:14.012 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:14.012 Verification LBA range: start 0x0 length 0x1000
00:13:14.012 Nvme1n1 : 10.01 9466.60 73.96 0.00 0.00 13474.93 2348.37 28180.48
00:13:14.012 ===================================================================================================================
00:13:14.012 Total : 9466.60 73.96 0.00 0.00 13474.93 2348.37 28180.48
00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=554570 00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:13:14.273 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:13:14.273 { 00:13:14.273 "params": { 00:13:14.274 "name": "Nvme$subsystem", 00:13:14.274 "trtype": "$TEST_TRANSPORT", 00:13:14.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:14.274 "adrfam": "ipv4", 00:13:14.274 "trsvcid": "$NVMF_PORT", 00:13:14.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:14.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:14.274 "hdgst": 
${hdgst:-false}, 00:13:14.274 "ddgst": ${ddgst:-false} 00:13:14.274 }, 00:13:14.274 "method": "bdev_nvme_attach_controller" 00:13:14.274 } 00:13:14.274 EOF 00:13:14.274 )") 00:13:14.274 [2024-09-30 22:41:41.093811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.093841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:13:14.274 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:13:14.274 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:13:14.274 22:41:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:13:14.274 "params": { 00:13:14.274 "name": "Nvme1", 00:13:14.274 "trtype": "tcp", 00:13:14.274 "traddr": "10.0.0.2", 00:13:14.274 "adrfam": "ipv4", 00:13:14.274 "trsvcid": "4420", 00:13:14.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:14.274 "hdgst": false, 00:13:14.274 "ddgst": false 00:13:14.274 }, 00:13:14.274 "method": "bdev_nvme_attach_controller" 00:13:14.274 }' 00:13:14.274 [2024-09-30 22:41:41.105813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.105823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.117841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.117848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.129871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.129879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.136014] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:13:14.274 [2024-09-30 22:41:41.136061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid554570 ] 00:13:14.274 [2024-09-30 22:41:41.141905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.141913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.153936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.153944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.165965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.165971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.177995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.178002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.190027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.190034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.202059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.202065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.213702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.274 [2024-09-30 22:41:41.214089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.214096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.226122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.226131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.238155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.238165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.250194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.250207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.262213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.262221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.268122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.274 [2024-09-30 22:41:41.274245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.274 [2024-09-30 22:41:41.274256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.274 [2024-09-30 22:41:41.286280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:13:14.274 [2024-09-30 22:41:41.286294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.298309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.298320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.310338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.310347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.322369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.322378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.334410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.334423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.346435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.346445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.358464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.358473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.370494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.370503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.382527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.382536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.435253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.435270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 Running I/O for 5 seconds... 
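The subsystem.c:2128 / nvmf_rpc.c:1517 error pairs streaming before and after this point are expected, not a failure: while this 5-second randrw bdevperf run is in flight, the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached, and the rejection is reported from the paused-namespace callback (nvmf_rpc_ns_paused), so each attempt exercises the namespace pause/resume path under active zero-copy I/O. A hedged sketch of a loop that would produce this pattern follows; the kill -0 liveness test and the || true are assumptions for illustration, while the RPC call and its arguments are taken from the log itself.

  # Sketch of the add_ns hammer loop inferred from the repeated errors (not the harness verbatim)
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  perfpid=554570                                # bdevperf pid reported at zcopy.sh@39 above
  while kill -0 "$perfpid" 2> /dev/null; do     # keep going as long as bdevperf is alive
      # Expected to fail with "Requested NSID 1 already in use"; the attempt still
      # pauses and resumes the namespace while random read/write I/O is running.
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done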
00:13:14.535 [2024-09-30 22:41:41.446696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.446707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.461657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.461675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.475406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.475423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.488590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.488606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.501523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.501538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.514183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.514197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.526961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.526976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.535 [2024-09-30 22:41:41.539399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.535 [2024-09-30 22:41:41.539414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.795 [2024-09-30 22:41:41.552814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.795 [2024-09-30 22:41:41.552834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.795 [2024-09-30 22:41:41.566503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.795 [2024-09-30 22:41:41.566518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.795 [2024-09-30 22:41:41.579804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.795 [2024-09-30 22:41:41.579818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.795 [2024-09-30 22:41:41.593220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.795 [2024-09-30 22:41:41.593234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.795 [2024-09-30 22:41:41.605708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.795 [2024-09-30 22:41:41.605723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.795 [2024-09-30 22:41:41.618915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.795 [2024-09-30 22:41:41.618930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.795 [2024-09-30 22:41:41.631667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 
[2024-09-30 22:41:41.631682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.644310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.644324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.656548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.656562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.670178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.670192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.683223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.683237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.696189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.696204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.709447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.709463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.722743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.722758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.736489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.736504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.749044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.749058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.762479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.762493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.775796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.775811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.788986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.789001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.796 [2024-09-30 22:41:41.802001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.796 [2024-09-30 22:41:41.802015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.056 [2024-09-30 22:41:41.814887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.814906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.827998] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.828013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.841339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.841353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.854756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.854770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.868485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.868500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.881240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.881254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.894226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.894241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.907559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.907575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.920839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.920853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.934144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.934159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.947513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.947528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.961016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.961030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.973774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.973790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:41.987004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:41.987019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:42.000213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:42.000228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.057 [2024-09-30 22:41:42.012824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.057 [2024-09-30 22:41:42.012839] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:15.057 [2024-09-30 22:41:42.025201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:15.057 [2024-09-30 22:41:42.025216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at ~13 ms intervals through 22:41:44.469; duplicate entries elided ...]
00:13:15.579 19093.00 IOPS, 149.16 MiB/s
00:13:16.625 19212.50 IOPS, 150.10 MiB/s
00:13:17.672 19215.33 IOPS, 150.12 MiB/s
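To keep this transcript readable, the duplicated entries above and below are elided; before eliding, the repetition is worth quantifying, since a change in the count between runs is itself a signal. A quick way to do that against the saved console output (the file name `console.log` is a placeholder, not something produced by this run):

```bash
#!/usr/bin/env bash
# Count how often each half of the repeated error pair fires in the
# captured console output ("console.log" is a hypothetical file name).
grep -c 'Requested NSID 1 already in use' console.log
grep -c 'Unable to add namespace' console.log
```

`grep -c` counts matching lines, so if every rejected add logs both messages, the two counts should agree one-for-one.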
00:13:17.672 [2024-09-30 22:41:44.482919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:17.672 [2024-09-30 22:41:44.482934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair continues at ~13 ms intervals through 22:41:45.992; duplicate entries elided ...]
00:13:18.456 19235.25 IOPS, 150.28 MiB/s
00:13:19.238 [2024-09-30 22:41:46.006376]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.006392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.019380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.019394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.032771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.032785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.045587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.045601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.058084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.058098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.071306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.071320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.084371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.084386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.097700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.097714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.111340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.111354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.125203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.125218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.136488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.238 [2024-09-30 22:41:46.136503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.238 [2024-09-30 22:41:46.149698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.239 [2024-09-30 22:41:46.149713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.239 [2024-09-30 22:41:46.163576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.239 [2024-09-30 22:41:46.163590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.239 [2024-09-30 22:41:46.177451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.239 [2024-09-30 22:41:46.177466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.239 [2024-09-30 22:41:46.190014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.239 [2024-09-30 22:41:46.190028] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.239 [2024-09-30 22:41:46.202698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.239 [2024-09-30 22:41:46.202712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.239 [2024-09-30 22:41:46.216073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.239 [2024-09-30 22:41:46.216087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.239 [2024-09-30 22:41:46.229646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.239 [2024-09-30 22:41:46.229661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.239 [2024-09-30 22:41:46.243162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.239 [2024-09-30 22:41:46.243184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.256195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.256210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.268707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.268721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.281352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.281366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.294810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.294824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.307214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.307228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.320819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.320833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.333797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.333811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.346627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.346640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.359527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.359541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.373243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.373258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.386518] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.386532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.400101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.400115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.412614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.412628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.425700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.425715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.438473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.438487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.451943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.451958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 19226.60 IOPS, 150.21 MiB/s [2024-09-30 22:41:46.463612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.463627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 00:13:19.499 Latency(us) 00:13:19.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.499 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:19.499 Nvme1n1 : 5.01 19228.02 150.22 0.00 0.00 6651.50 3044.69 15510.19 00:13:19.499 =================================================================================================================== 00:13:19.499 Total : 19228.02 150.22 0.00 0.00 6651.50 3044.69 15510.19 00:13:19.499 [2024-09-30 22:41:46.473810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.473822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.485848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.485863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.497874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.497886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.499 [2024-09-30 22:41:46.509909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.499 [2024-09-30 22:41:46.509920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.760 [2024-09-30 22:41:46.521935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.760 [2024-09-30 22:41:46.521946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.760 [2024-09-30 22:41:46.533964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.760 
[2024-09-30 22:41:46.533973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.760 [2024-09-30 22:41:46.545997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.760 [2024-09-30 22:41:46.546007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.760 [2024-09-30 22:41:46.558026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.760 [2024-09-30 22:41:46.558036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.760 [2024-09-30 22:41:46.570059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.760 [2024-09-30 22:41:46.570068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.760 [2024-09-30 22:41:46.582088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.760 [2024-09-30 22:41:46.582095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (554570) - No such process 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 554570 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.760 delay0 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.760 22:41:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:19.760 [2024-09-30 22:41:46.692576] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:27.893 [2024-09-30 22:41:53.729524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528fa0 is same with the state(6) to be set 00:13:27.893 Initializing NVMe Controllers 00:13:27.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:27.893 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:27.893 Initialization complete. Launching workers. 00:13:27.893 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 32951 00:13:27.893 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33072, failed to submit 117 00:13:27.893 success 32995, unsuccessful 77, failed 0 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.893 rmmod nvme_tcp 00:13:27.893 rmmod nvme_fabrics 00:13:27.893 rmmod nvme_keyring 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.893 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 552203 ']' 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 552203 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 552203 ']' 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 552203 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 552203 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 552203' 00:13:27.894 killing process with pid 552203 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 552203 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 552203 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 
00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.894 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.278 00:13:29.278 real 0m34.747s 00:13:29.278 user 0m45.586s 00:13:29.278 sys 0m11.865s 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:29.278 ************************************ 00:13:29.278 END TEST nvmf_zcopy 00:13:29.278 ************************************ 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:29.278 ************************************ 00:13:29.278 START TEST nvmf_nmic 00:13:29.278 ************************************ 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:29.278 * Looking for test storage... 
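Before the nmic setup below gets going, note what the zcopy abort pass that just finished actually did: it wrapped the namespace's bdev in a delay bdev so submitted I/O stays in flight long enough to be aborted, then ran the abort example against the TCP listener. A minimal standalone sketch of that sequence, assuming a running nvmf_tgt that already exposes malloc0 as NSID 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and an SPDK tree at $SPDK (true for this job, not in general):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Free NSID 1, then re-expose the bdev behind a delay bdev with large
# (1,000,000 us) average and tail latencies on both reads and writes,
# so queued commands linger long enough for aborts to land.
$SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive 50/50 random I/O at queue depth 64 for 5 s, aborting outstanding commands as it goes.
$SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'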
00:13:29.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.278 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:29.279 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:13:29.279 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:29.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.540 --rc genhtml_branch_coverage=1 00:13:29.540 --rc genhtml_function_coverage=1 00:13:29.540 --rc genhtml_legend=1 00:13:29.540 --rc geninfo_all_blocks=1 00:13:29.540 --rc geninfo_unexecuted_blocks=1 00:13:29.540 00:13:29.540 ' 00:13:29.540 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:29.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.540 --rc genhtml_branch_coverage=1 00:13:29.540 --rc genhtml_function_coverage=1 00:13:29.540 --rc genhtml_legend=1 00:13:29.540 --rc geninfo_all_blocks=1 00:13:29.541 --rc geninfo_unexecuted_blocks=1 00:13:29.541 00:13:29.541 ' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:29.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.541 --rc genhtml_branch_coverage=1 00:13:29.541 --rc genhtml_function_coverage=1 00:13:29.541 --rc genhtml_legend=1 00:13:29.541 --rc geninfo_all_blocks=1 00:13:29.541 --rc geninfo_unexecuted_blocks=1 00:13:29.541 00:13:29.541 ' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:29.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.541 --rc genhtml_branch_coverage=1 00:13:29.541 --rc genhtml_function_coverage=1 00:13:29.541 --rc genhtml_legend=1 00:13:29.541 --rc geninfo_all_blocks=1 00:13:29.541 --rc geninfo_unexecuted_blocks=1 00:13:29.541 00:13:29.541 ' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
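The xtrace above is scripts/common.sh comparing the installed lcov version against 2 to decide which coverage flags apply. Stripped of the harness plumbing, it is a field-wise walk over the separator-split version strings; a reduced sketch of just the "less than" case (ver_lt is a stand-in name, not the script's own function):

ver_lt() {
  # Split both versions on the separators the script uses ('.', '-', ':').
  local IFS=.-: i
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  # Compare field by field, padding the shorter version with zeros.
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}
ver_lt 1.15 2 && echo 'lcov predates 2.x: keep the lcov_branch/function_coverage rc flags'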
00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:29.541 
22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.541 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:37.684 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:37.684 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.684 
22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:37.684 Found net devices under 0000:31:00.0: cvl_0_0 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:37.684 Found net devices under 0000:31:00.1: cvl_0_1 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.684 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.685 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.685 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.685 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.685 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.685 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.685 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.685 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:13:37.685 00:13:37.685 --- 10.0.0.2 ping statistics --- 00:13:37.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.685 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:37.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:13:37.685 00:13:37.685 --- 10.0.0.1 ping statistics --- 00:13:37.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.685 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=561433 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 561433 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 561433 ']' 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.685 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.685 [2024-09-30 22:42:04.147935] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
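Condensed, the nvmftestinit trace above moves one of the two detected ice ports into a private network namespace, so initiator (10.0.0.1) and target (10.0.0.2) traffic really crosses the physical link instead of being short-circuited by the local stack. A sketch of just that plumbing, run as root, assuming the same cvl_0_0/cvl_0_1 interface names this job detected:

# Target port lives in its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP (port 4420) in through the initiator-side interface, then check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1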
00:13:37.685 [2024-09-30 22:42:04.148002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.685 [2024-09-30 22:42:04.238286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.685 [2024-09-30 22:42:04.336932] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.685 [2024-09-30 22:42:04.336993] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.685 [2024-09-30 22:42:04.337002] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.685 [2024-09-30 22:42:04.337009] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.685 [2024-09-30 22:42:04.337016] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.685 [2024-09-30 22:42:04.337203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.685 [2024-09-30 22:42:04.337368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.685 [2024-09-30 22:42:04.337527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.685 [2024-09-30 22:42:04.337527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.258 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.258 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:13:38.258 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:38.258 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.258 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 [2024-09-30 22:42:05.032499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 Malloc0 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 [2024-09-30 22:42:05.098256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:38.258 test case1: single bdev can't be used in multiple subsystems 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 [2024-09-30 22:42:05.134092] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:38.258 [2024-09-30 22:42:05.134117] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:38.258 [2024-09-30 22:42:05.134126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.258 request: 00:13:38.258 { 00:13:38.258 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:38.258 "namespace": { 00:13:38.258 "bdev_name": "Malloc0", 00:13:38.258 "no_auto_visible": false 
00:13:38.258 }, 00:13:38.258 "method": "nvmf_subsystem_add_ns", 00:13:38.258 "req_id": 1 00:13:38.258 } 00:13:38.258 Got JSON-RPC error response 00:13:38.258 response: 00:13:38.258 { 00:13:38.258 "code": -32602, 00:13:38.258 "message": "Invalid parameters" 00:13:38.258 } 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:38.258 Adding namespace failed - expected result. 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:38.258 test case2: host connect to nvmf target in multiple paths 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 [2024-09-30 22:42:05.146306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 22:42:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.171 22:42:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:41.554 22:42:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.554 22:42:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:41.554 22:42:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.554 22:42:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:41.554 22:42:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:43.464 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:43.464 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:43.464 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.464 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:43.464 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.464 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:43.464 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:43.464 [global] 00:13:43.464 thread=1 00:13:43.464 invalidate=1 00:13:43.464 rw=write 00:13:43.464 time_based=1 00:13:43.464 runtime=1 00:13:43.464 ioengine=libaio 00:13:43.464 direct=1 00:13:43.464 bs=4096 00:13:43.464 iodepth=1 00:13:43.464 norandommap=0 00:13:43.464 numjobs=1 00:13:43.464 00:13:43.464 verify_dump=1 00:13:43.464 verify_backlog=512 00:13:43.464 verify_state_save=0 00:13:43.464 do_verify=1 00:13:43.464 verify=crc32c-intel 00:13:43.464 [job0] 00:13:43.464 filename=/dev/nvme0n1 00:13:43.464 Could not set queue depth (nvme0n1) 00:13:43.724 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:43.724 fio-3.35 00:13:43.724 Starting 1 thread 00:13:45.105 00:13:45.105 job0: (groupid=0, jobs=1): err= 0: pid=563423: Mon Sep 30 22:42:11 2024 00:13:45.105 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:45.105 slat (nsec): min=10887, max=59266, avg=27482.05, stdev=2739.97 00:13:45.105 clat (usec): min=692, max=1197, avg=972.39, stdev=62.78 00:13:45.105 lat (usec): min=703, max=1224, avg=999.87, stdev=62.66 00:13:45.105 clat percentiles (usec): 00:13:45.105 | 1.00th=[ 783], 5.00th=[ 865], 10.00th=[ 889], 20.00th=[ 930], 00:13:45.105 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:13:45.105 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1074], 00:13:45.105 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1205], 99.95th=[ 1205], 00:13:45.105 | 99.99th=[ 1205] 00:13:45.105 write: IOPS=751, BW=3005KiB/s (3077kB/s)(3008KiB/1001msec); 0 zone resets 00:13:45.105 slat (usec): min=9, max=30973, avg=71.41, stdev=1128.42 00:13:45.105 clat (usec): min=260, max=813, avg=561.51, stdev=99.94 00:13:45.105 lat (usec): min=270, max=31463, avg=632.92, stdev=1130.59 00:13:45.105 clat percentiles (usec): 00:13:45.105 | 1.00th=[ 338], 5.00th=[ 396], 10.00th=[ 424], 20.00th=[ 474], 00:13:45.105 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 586], 00:13:45.105 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 725], 00:13:45.105 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 816], 99.95th=[ 816], 00:13:45.105 | 99.99th=[ 816] 00:13:45.105 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:45.105 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:45.105 lat (usec) : 500=16.38%, 750=42.25%, 1000=28.56% 00:13:45.106 lat (msec) : 2=12.82% 00:13:45.106 cpu : usr=2.40%, sys=5.10%, ctx=1267, majf=0, minf=1 00:13:45.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.106 issued rwts: total=512,752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.106 00:13:45.106 Run status group 0 (all jobs): 00:13:45.106 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:13:45.106 WRITE: bw=3005KiB/s (3077kB/s), 3005KiB/s-3005KiB/s (3077kB/s-3077kB/s), io=3008KiB (3080kB), run=1001-1001msec 00:13:45.106 00:13:45.106 Disk stats (read/write): 00:13:45.106 nvme0n1: ios=537/581, merge=0/0, ticks=1462/275, in_queue=1737, util=98.70% 00:13:45.106 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:45.106 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.106 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:45.106 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:45.106 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.106 rmmod nvme_tcp 00:13:45.106 rmmod nvme_fabrics 00:13:45.106 rmmod nvme_keyring 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 561433 ']' 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 561433 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 561433 ']' 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 561433 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.106 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 561433 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 561433' 00:13:45.365 killing process with pid 561433 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 561433 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 561433 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:45.365 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:13:45.366 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:45.366 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:13:45.366 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.366 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:45.366 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.366 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.366 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:47.908 00:13:47.908 real 0m18.229s 00:13:47.908 user 0m44.959s 00:13:47.908 sys 0m6.708s 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:47.908 ************************************ 00:13:47.908 END TEST nvmf_nmic 00:13:47.908 ************************************ 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:47.908 ************************************ 00:13:47.908 START TEST nvmf_fio_target 00:13:47.908 ************************************ 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:47.908 * Looking for test storage... 
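The lcov gate that opens the next test leans on cmp_versions, which the xtrace below shows splitting each version string on ".-:" and comparing the resulting fields left to right. A standalone sketch of that comparison, assuming purely numeric fields (the harness's version handles more cases):

    # Succeed when $1 is strictly older than $2, e.g. version_lt 1.15 2.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"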
00:13:47.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.908 --rc genhtml_branch_coverage=1 00:13:47.908 --rc genhtml_function_coverage=1 00:13:47.908 --rc genhtml_legend=1 00:13:47.908 --rc geninfo_all_blocks=1 00:13:47.908 --rc geninfo_unexecuted_blocks=1 00:13:47.908 00:13:47.908 ' 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.908 --rc genhtml_branch_coverage=1 00:13:47.908 --rc genhtml_function_coverage=1 00:13:47.908 --rc genhtml_legend=1 00:13:47.908 --rc geninfo_all_blocks=1 00:13:47.908 --rc geninfo_unexecuted_blocks=1 00:13:47.908 00:13:47.908 ' 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.908 --rc genhtml_branch_coverage=1 00:13:47.908 --rc genhtml_function_coverage=1 00:13:47.908 --rc genhtml_legend=1 00:13:47.908 --rc geninfo_all_blocks=1 00:13:47.908 --rc geninfo_unexecuted_blocks=1 00:13:47.908 00:13:47.908 ' 00:13:47.908 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.909 --rc genhtml_branch_coverage=1 00:13:47.909 --rc genhtml_function_coverage=1 00:13:47.909 --rc genhtml_legend=1 00:13:47.909 --rc geninfo_all_blocks=1 00:13:47.909 --rc geninfo_unexecuted_blocks=1 00:13:47.909 00:13:47.909 ' 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:47.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.909 22:42:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:47.909 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.049 22:42:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.049 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:56.050 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:56.050 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.050 22:42:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:56.050 Found net devices under 0000:31:00.0: cvl_0_0 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:56.050 Found net devices under 0000:31:00.1: cvl_0_1 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.050 22:42:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.050 22:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:13:56.050 00:13:56.050 --- 10.0.0.2 ping statistics --- 00:13:56.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.050 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:13:56.050 00:13:56.050 --- 10.0.0.1 ping statistics --- 00:13:56.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.050 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:13:56.050 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=568099 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 568099 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 568099 ']' 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.051 22:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.051 [2024-09-30 22:42:22.407270] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
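Before this second target comes up, nvmf_tcp_init (traced just above) has rebuilt the two-namespace topology: the first E810 port moves into cvl_0_0_ns_spdk and becomes the target-side interface at 10.0.0.2, the second port stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed into a standalone sketch, with interface and namespace names copied from the trace and root privileges assumed:

    # Target side lives in its own netns; initiator side stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP on the initiator-facing interface, then verify both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two pings are exactly the reachability check whose single-packet statistics appear just above, immediately before nvmfappstart runs.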
00:13:56.051 [2024-09-30 22:42:22.407384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.051 [2024-09-30 22:42:22.498675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.051 [2024-09-30 22:42:22.596248] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.051 [2024-09-30 22:42:22.596307] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.051 [2024-09-30 22:42:22.596316] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.051 [2024-09-30 22:42:22.596323] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.051 [2024-09-30 22:42:22.596329] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.051 [2024-09-30 22:42:22.596489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.051 [2024-09-30 22:42:22.596681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.051 [2024-09-30 22:42:22.596844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.051 [2024-09-30 22:42:22.596844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.312 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.312 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:13:56.312 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:56.312 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.312 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.312 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.312 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.573 [2024-09-30 22:42:23.442673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.573 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.834 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:56.834 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.094 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:57.094 22:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.355 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:57.355 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.355 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:57.355 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:57.617 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.878 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:57.878 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.139 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:58.139 22:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.400 22:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:58.400 22:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:58.400 22:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:58.660 22:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:58.660 22:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:58.920 22:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:58.920 22:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.920 22:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.180 [2024-09-30 22:42:26.043937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.180 22:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:59.440 22:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:59.440 22:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.378 22:42:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:01.378 22:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:01.378 22:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.378 22:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:01.378 22:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:01.378 22:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:03.447 22:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:03.447 22:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:03.447 22:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.447 22:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:03.447 22:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.447 22:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:03.447 22:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:03.447 [global] 00:14:03.447 thread=1 00:14:03.447 invalidate=1 00:14:03.447 rw=write 00:14:03.447 time_based=1 00:14:03.447 runtime=1 00:14:03.447 ioengine=libaio 00:14:03.447 direct=1 00:14:03.447 bs=4096 00:14:03.447 iodepth=1 00:14:03.447 norandommap=0 00:14:03.447 numjobs=1 00:14:03.447 00:14:03.447 verify_dump=1 00:14:03.447 verify_backlog=512 00:14:03.447 verify_state_save=0 00:14:03.447 do_verify=1 00:14:03.447 verify=crc32c-intel 00:14:03.447 [job0] 00:14:03.447 filename=/dev/nvme0n1 00:14:03.447 [job1] 00:14:03.447 filename=/dev/nvme0n2 00:14:03.447 [job2] 00:14:03.447 filename=/dev/nvme0n3 00:14:03.447 [job3] 00:14:03.447 filename=/dev/nvme0n4 00:14:03.447 Could not set queue depth (nvme0n1) 00:14:03.447 Could not set queue depth (nvme0n2) 00:14:03.447 Could not set queue depth (nvme0n3) 00:14:03.447 Could not set queue depth (nvme0n4) 00:14:03.447 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.447 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.447 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.447 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.447 fio-3.35 00:14:03.447 Starting 4 threads 00:14:04.833 00:14:04.833 job0: (groupid=0, jobs=1): err= 0: pid=569804: Mon Sep 30 22:42:31 2024 00:14:04.833 read: IOPS=54, BW=216KiB/s (222kB/s)(224KiB/1035msec) 00:14:04.833 slat (nsec): min=8081, max=45225, avg=26185.52, stdev=4546.19 00:14:04.833 clat (usec): min=758, max=42060, avg=13352.29, stdev=18833.74 00:14:04.833 lat (usec): min=787, max=42086, avg=13378.48, stdev=18834.08 00:14:04.833 clat percentiles (usec): 00:14:04.833 | 1.00th=[ 758], 5.00th=[ 816], 10.00th=[ 898], 20.00th=[ 1004], 
00:14:04.833 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1106], 60.00th=[ 1139], 00:14:04.833 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:14:04.833 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:04.833 | 99.99th=[42206] 00:14:04.833 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:14:04.833 slat (nsec): min=9893, max=66881, avg=29053.05, stdev=11400.78 00:14:04.833 clat (usec): min=108, max=1727, avg=513.87, stdev=191.50 00:14:04.833 lat (usec): min=122, max=1762, avg=542.93, stdev=195.49 00:14:04.833 clat percentiles (usec): 00:14:04.833 | 1.00th=[ 129], 5.00th=[ 235], 10.00th=[ 262], 20.00th=[ 302], 00:14:04.833 | 30.00th=[ 388], 40.00th=[ 469], 50.00th=[ 545], 60.00th=[ 594], 00:14:04.833 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 766], 00:14:04.833 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 1729], 99.95th=[ 1729], 00:14:04.833 | 99.99th=[ 1729] 00:14:04.833 bw ( KiB/s): min= 4096, max= 4096, per=37.61%, avg=4096.00, stdev= 0.00, samples=1 00:14:04.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:04.833 lat (usec) : 250=6.51%, 500=33.45%, 750=44.54%, 1000=7.22% 00:14:04.833 lat (msec) : 2=5.28%, 50=2.99% 00:14:04.833 cpu : usr=0.87%, sys=1.45%, ctx=572, majf=0, minf=1 00:14:04.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.833 issued rwts: total=56,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.833 job1: (groupid=0, jobs=1): err= 0: pid=569811: Mon Sep 30 22:42:31 2024 00:14:04.833 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:04.833 slat (nsec): min=25470, max=55258, avg=26623.49, stdev=3119.51 00:14:04.833 clat (usec): min=452, max=1341, avg=1045.20, stdev=144.57 00:14:04.833 lat (usec): min=479, max=1367, avg=1071.82, stdev=144.45 00:14:04.833 clat percentiles (usec): 00:14:04.833 | 1.00th=[ 635], 5.00th=[ 750], 10.00th=[ 848], 20.00th=[ 938], 00:14:04.833 | 30.00th=[ 988], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1106], 00:14:04.833 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:14:04.833 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:14:04.833 | 99.99th=[ 1336] 00:14:04.833 write: IOPS=769, BW=3077KiB/s (3151kB/s)(3080KiB/1001msec); 0 zone resets 00:14:04.833 slat (nsec): min=8921, max=59536, avg=31737.03, stdev=8024.74 00:14:04.833 clat (usec): min=190, max=1108, avg=541.21, stdev=149.14 00:14:04.833 lat (usec): min=213, max=1142, avg=572.95, stdev=151.50 00:14:04.833 clat percentiles (usec): 00:14:04.833 | 1.00th=[ 241], 5.00th=[ 293], 10.00th=[ 334], 20.00th=[ 404], 00:14:04.833 | 30.00th=[ 461], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[ 586], 00:14:04.833 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 734], 95.00th=[ 775], 00:14:04.833 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 1106], 99.95th=[ 1106], 00:14:04.833 | 99.99th=[ 1106] 00:14:04.833 bw ( KiB/s): min= 4096, max= 4096, per=37.61%, avg=4096.00, stdev= 0.00, samples=1 00:14:04.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:04.833 lat (usec) : 250=1.33%, 500=22.15%, 750=33.93%, 1000=15.29% 00:14:04.833 lat (msec) : 2=27.30% 00:14:04.833 cpu : usr=2.60%, sys=5.30%, ctx=1282, majf=0, minf=2 00:14:04.833 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.833 issued rwts: total=512,770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.833 job2: (groupid=0, jobs=1): err= 0: pid=569840: Mon Sep 30 22:42:31 2024 00:14:04.833 read: IOPS=18, BW=74.4KiB/s (76.1kB/s)(76.0KiB/1022msec) 00:14:04.833 slat (nsec): min=26731, max=27671, avg=27018.84, stdev=202.25 00:14:04.833 clat (usec): min=1072, max=42023, avg=37656.25, stdev=12853.62 00:14:04.833 lat (usec): min=1099, max=42050, avg=37683.27, stdev=12853.60 00:14:04.833 clat percentiles (usec): 00:14:04.833 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[ 1287], 20.00th=[41681], 00:14:04.833 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:14:04.833 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:04.833 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:04.833 | 99.99th=[42206] 00:14:04.833 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:14:04.833 slat (nsec): min=9033, max=54531, avg=29024.88, stdev=10697.37 00:14:04.833 clat (usec): min=187, max=931, avg=561.42, stdev=139.30 00:14:04.833 lat (usec): min=201, max=964, avg=590.44, stdev=143.65 00:14:04.833 clat percentiles (usec): 00:14:04.833 | 1.00th=[ 247], 5.00th=[ 314], 10.00th=[ 371], 20.00th=[ 453], 00:14:04.833 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 603], 00:14:04.833 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 783], 00:14:04.833 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 930], 99.95th=[ 930], 00:14:04.833 | 99.99th=[ 930] 00:14:04.833 bw ( KiB/s): min= 4096, max= 4096, per=37.61%, avg=4096.00, stdev= 0.00, samples=1 00:14:04.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:04.833 lat (usec) : 250=1.32%, 500=31.07%, 750=56.31%, 1000=7.72% 00:14:04.833 lat (msec) : 2=0.38%, 50=3.20% 00:14:04.833 cpu : usr=1.08%, sys=1.67%, ctx=531, majf=0, minf=2 00:14:04.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.833 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.833 job3: (groupid=0, jobs=1): err= 0: pid=569853: Mon Sep 30 22:42:31 2024 00:14:04.833 read: IOPS=716, BW=2865KiB/s (2934kB/s)(2868KiB/1001msec) 00:14:04.833 slat (nsec): min=3537, max=15694, avg=4718.85, stdev=720.50 00:14:04.833 clat (usec): min=387, max=1024, avg=756.26, stdev=86.38 00:14:04.833 lat (usec): min=392, max=1029, avg=760.98, stdev=86.40 00:14:04.833 clat percentiles (usec): 00:14:04.833 | 1.00th=[ 523], 5.00th=[ 611], 10.00th=[ 644], 20.00th=[ 685], 00:14:04.833 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 783], 00:14:04.833 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 889], 00:14:04.833 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1029], 00:14:04.833 | 99.99th=[ 1029] 00:14:04.833 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:04.833 slat (nsec): min=4409, max=51728, avg=5799.92, stdev=1715.29 00:14:04.833 clat (usec): min=172, 
max=761, avg=432.71, stdev=92.54 00:14:04.833 lat (usec): min=178, max=767, avg=438.51, stdev=92.61 00:14:04.833 clat percentiles (usec): 00:14:04.833 | 1.00th=[ 223], 5.00th=[ 262], 10.00th=[ 302], 20.00th=[ 355], 00:14:04.833 | 30.00th=[ 383], 40.00th=[ 412], 50.00th=[ 445], 60.00th=[ 474], 00:14:04.833 | 70.00th=[ 490], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 570], 00:14:04.833 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 742], 99.95th=[ 758], 00:14:04.833 | 99.99th=[ 758] 00:14:04.833 bw ( KiB/s): min= 4096, max= 4096, per=37.61%, avg=4096.00, stdev= 0.00, samples=1 00:14:04.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:04.833 lat (usec) : 250=2.35%, 500=42.62%, 750=31.65%, 1000=23.21% 00:14:04.833 lat (msec) : 2=0.17% 00:14:04.833 cpu : usr=0.20%, sys=1.10%, ctx=1744, majf=0, minf=1 00:14:04.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.833 issued rwts: total=717,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.833 00:14:04.833 Run status group 0 (all jobs): 00:14:04.833 READ: bw=5040KiB/s (5161kB/s), 74.4KiB/s-2865KiB/s (76.1kB/s-2934kB/s), io=5216KiB (5341kB), run=1001-1035msec 00:14:04.833 WRITE: bw=10.6MiB/s (11.2MB/s), 1979KiB/s-4092KiB/s (2026kB/s-4190kB/s), io=11.0MiB (11.5MB), run=1001-1035msec 00:14:04.833 00:14:04.833 Disk stats (read/write): 00:14:04.833 nvme0n1: ios=84/512, merge=0/0, ticks=1797/247, in_queue=2044, util=88.69% 00:14:04.833 nvme0n2: ios=497/512, merge=0/0, ticks=528/213, in_queue=741, util=84.72% 00:14:04.833 nvme0n3: ios=68/512, merge=0/0, ticks=992/234, in_queue=1226, util=94.97% 00:14:04.833 nvme0n4: ios=535/858, merge=0/0, ticks=1272/362, in_queue=1634, util=98.39% 00:14:04.833 22:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:04.833 [global] 00:14:04.833 thread=1 00:14:04.834 invalidate=1 00:14:04.834 rw=randwrite 00:14:04.834 time_based=1 00:14:04.834 runtime=1 00:14:04.834 ioengine=libaio 00:14:04.834 direct=1 00:14:04.834 bs=4096 00:14:04.834 iodepth=1 00:14:04.834 norandommap=0 00:14:04.834 numjobs=1 00:14:04.834 00:14:04.834 verify_dump=1 00:14:04.834 verify_backlog=512 00:14:04.834 verify_state_save=0 00:14:04.834 do_verify=1 00:14:04.834 verify=crc32c-intel 00:14:04.834 [job0] 00:14:04.834 filename=/dev/nvme0n1 00:14:04.834 [job1] 00:14:04.834 filename=/dev/nvme0n2 00:14:04.834 [job2] 00:14:04.834 filename=/dev/nvme0n3 00:14:04.834 [job3] 00:14:04.834 filename=/dev/nvme0n4 00:14:05.124 Could not set queue depth (nvme0n1) 00:14:05.124 Could not set queue depth (nvme0n2) 00:14:05.124 Could not set queue depth (nvme0n3) 00:14:05.124 Could not set queue depth (nvme0n4) 00:14:05.391 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.391 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.391 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.391 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.391 fio-3.35 00:14:05.391 Starting 4 threads 
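The waitforserial trace at the top of this run (common/autotest_common.sh@1198-1208) shows how the harness decides that all four namespaces are attached: it sleeps two seconds, then polls lsblk, counting block devices whose SERIAL column carries the subsystem serial, and returns once the count matches the expected device count. The sketch below reconstructs that helper from the traced line numbers; it is not the verbatim source, and the retry sleep inside the loop is an assumption, since the log only traces the first-pass success.

    # Reconstruction of the helper traced at autotest_common.sh@1198-1208.
    # $1 = serial to wait for (SPDKISFASTANDAWESOME above), $2 = expected
    # namespace count (4 above). A sketch, not the verbatim function.
    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n "$2" ]] && nvme_device_counter="$2"
        sleep 2
        while (( i++ <= 15 )); do
            # count block devices exposing the target serial
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2    # assumed retry delay; only the success path is traced
        done
        return 1
    }

Once the count reaches 4, fio.sh@50 hands /dev/nvme0n1 through /dev/nvme0n4 to the wrapper as one fio job each, which is the [job0]..[job3] layout echoed in both job files above.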
00:14:06.792 00:14:06.792 job0: (groupid=0, jobs=1): err= 0: pid=570334: Mon Sep 30 22:42:33 2024 00:14:06.792 read: IOPS=20, BW=80.8KiB/s (82.8kB/s)(84.0KiB/1039msec) 00:14:06.792 slat (nsec): min=26476, max=26992, avg=26737.95, stdev=128.66 00:14:06.792 clat (usec): min=816, max=41248, avg=39049.14, stdev=8761.83 00:14:06.792 lat (usec): min=842, max=41275, avg=39075.88, stdev=8761.86 00:14:06.792 clat percentiles (usec): 00:14:06.792 | 1.00th=[ 816], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:14:06.792 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:06.792 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:06.792 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:06.793 | 99.99th=[41157] 00:14:06.793 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:14:06.793 slat (nsec): min=9672, max=52669, avg=27728.30, stdev=11221.52 00:14:06.793 clat (usec): min=97, max=1126, avg=390.54, stdev=180.34 00:14:06.793 lat (usec): min=114, max=1158, avg=418.27, stdev=186.23 00:14:06.793 clat percentiles (usec): 00:14:06.793 | 1.00th=[ 113], 5.00th=[ 126], 10.00th=[ 145], 20.00th=[ 235], 00:14:06.793 | 30.00th=[ 289], 40.00th=[ 330], 50.00th=[ 371], 60.00th=[ 420], 00:14:06.793 | 70.00th=[ 478], 80.00th=[ 545], 90.00th=[ 619], 95.00th=[ 701], 00:14:06.793 | 99.00th=[ 857], 99.50th=[ 906], 99.90th=[ 1123], 99.95th=[ 1123], 00:14:06.793 | 99.99th=[ 1123] 00:14:06.793 bw ( KiB/s): min= 4096, max= 4096, per=39.30%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.793 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.793 lat (usec) : 100=0.19%, 250=20.83%, 500=48.78%, 750=22.89%, 1000=3.38% 00:14:06.793 lat (msec) : 2=0.19%, 50=3.75% 00:14:06.793 cpu : usr=0.67%, sys=1.45%, ctx=537, majf=0, minf=1 00:14:06.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.793 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.793 job1: (groupid=0, jobs=1): err= 0: pid=570336: Mon Sep 30 22:42:33 2024 00:14:06.793 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:06.793 slat (nsec): min=25856, max=59669, avg=27079.90, stdev=3558.72 00:14:06.793 clat (usec): min=646, max=1350, avg=1101.39, stdev=110.85 00:14:06.793 lat (usec): min=672, max=1377, avg=1128.47, stdev=110.79 00:14:06.793 clat percentiles (usec): 00:14:06.793 | 1.00th=[ 775], 5.00th=[ 881], 10.00th=[ 955], 20.00th=[ 1020], 00:14:06.793 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1139], 00:14:06.793 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:14:06.793 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1352], 99.95th=[ 1352], 00:14:06.793 | 99.99th=[ 1352] 00:14:06.793 write: IOPS=658, BW=2633KiB/s (2697kB/s)(2636KiB/1001msec); 0 zone resets 00:14:06.793 slat (nsec): min=9743, max=53262, avg=29082.02, stdev=10124.49 00:14:06.793 clat (usec): min=155, max=1055, avg=595.77, stdev=139.11 00:14:06.793 lat (usec): min=165, max=1090, avg=624.86, stdev=144.97 00:14:06.793 clat percentiles (usec): 00:14:06.793 | 1.00th=[ 247], 5.00th=[ 330], 10.00th=[ 396], 20.00th=[ 486], 00:14:06.793 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:14:06.793 | 70.00th=[ 668], 80.00th=[ 
709], 90.00th=[ 750], 95.00th=[ 799], 00:14:06.793 | 99.00th=[ 889], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057], 00:14:06.793 | 99.99th=[ 1057] 00:14:06.793 bw ( KiB/s): min= 4096, max= 4096, per=39.30%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.793 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.793 lat (usec) : 250=0.68%, 500=12.55%, 750=37.66%, 1000=12.38% 00:14:06.793 lat (msec) : 2=36.72% 00:14:06.793 cpu : usr=1.70%, sys=3.40%, ctx=1173, majf=0, minf=1 00:14:06.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.793 issued rwts: total=512,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.793 job2: (groupid=0, jobs=1): err= 0: pid=570342: Mon Sep 30 22:42:33 2024 00:14:06.793 read: IOPS=57, BW=232KiB/s (237kB/s)(232KiB/1002msec) 00:14:06.793 slat (nsec): min=26215, max=27183, avg=26661.14, stdev=192.46 00:14:06.793 clat (usec): min=952, max=42062, avg=12153.64, stdev=18038.28 00:14:06.793 lat (usec): min=978, max=42088, avg=12180.31, stdev=18038.29 00:14:06.793 clat percentiles (usec): 00:14:06.793 | 1.00th=[ 955], 5.00th=[ 979], 10.00th=[ 1004], 20.00th=[ 1057], 00:14:06.793 | 30.00th=[ 1090], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:14:06.793 | 70.00th=[ 1287], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:14:06.793 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:06.793 | 99.99th=[42206] 00:14:06.793 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:14:06.793 slat (nsec): min=3185, max=80927, avg=28200.16, stdev=12185.53 00:14:06.793 clat (usec): min=133, max=940, avg=538.12, stdev=150.72 00:14:06.793 lat (usec): min=137, max=974, avg=566.32, stdev=155.37 00:14:06.793 clat percentiles (usec): 00:14:06.793 | 1.00th=[ 169], 5.00th=[ 273], 10.00th=[ 330], 20.00th=[ 408], 00:14:06.793 | 30.00th=[ 461], 40.00th=[ 498], 50.00th=[ 545], 60.00th=[ 594], 00:14:06.793 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 766], 00:14:06.793 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:14:06.793 | 99.99th=[ 938] 00:14:06.793 bw ( KiB/s): min= 4096, max= 4096, per=39.30%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.793 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.793 lat (usec) : 250=3.51%, 500=32.63%, 750=47.72%, 1000=6.67% 00:14:06.793 lat (msec) : 2=6.67%, 50=2.81% 00:14:06.793 cpu : usr=0.50%, sys=1.80%, ctx=573, majf=0, minf=1 00:14:06.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.793 issued rwts: total=58,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.793 job3: (groupid=0, jobs=1): err= 0: pid=570346: Mon Sep 30 22:42:33 2024 00:14:06.793 read: IOPS=611, BW=2446KiB/s (2504kB/s)(2448KiB/1001msec) 00:14:06.793 slat (nsec): min=7146, max=56987, avg=25036.87, stdev=6677.87 00:14:06.793 clat (usec): min=329, max=1016, avg=723.57, stdev=124.69 00:14:06.793 lat (usec): min=356, max=1043, avg=748.61, stdev=125.74 00:14:06.793 clat percentiles (usec): 00:14:06.793 | 1.00th=[ 416], 
5.00th=[ 510], 10.00th=[ 562], 20.00th=[ 611], 00:14:06.793 | 30.00th=[ 652], 40.00th=[ 693], 50.00th=[ 734], 60.00th=[ 766], 00:14:06.793 | 70.00th=[ 799], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 906], 00:14:06.793 | 99.00th=[ 963], 99.50th=[ 971], 99.90th=[ 1020], 99.95th=[ 1020], 00:14:06.793 | 99.99th=[ 1020] 00:14:06.793 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:06.793 slat (nsec): min=10038, max=52770, avg=31747.45, stdev=8062.71 00:14:06.793 clat (usec): min=107, max=939, avg=484.03, stdev=140.59 00:14:06.793 lat (usec): min=118, max=973, avg=515.77, stdev=142.56 00:14:06.793 clat percentiles (usec): 00:14:06.793 | 1.00th=[ 151], 5.00th=[ 258], 10.00th=[ 289], 20.00th=[ 367], 00:14:06.793 | 30.00th=[ 404], 40.00th=[ 449], 50.00th=[ 486], 60.00th=[ 519], 00:14:06.793 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 668], 95.00th=[ 709], 00:14:06.793 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 906], 99.95th=[ 938], 00:14:06.793 | 99.99th=[ 938] 00:14:06.793 bw ( KiB/s): min= 4096, max= 4096, per=39.30%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.793 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.793 lat (usec) : 250=2.51%, 500=32.76%, 750=45.72%, 1000=18.89% 00:14:06.793 lat (msec) : 2=0.12% 00:14:06.793 cpu : usr=2.20%, sys=5.20%, ctx=1639, majf=0, minf=1 00:14:06.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.794 issued rwts: total=612,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.794 00:14:06.794 Run status group 0 (all jobs): 00:14:06.794 READ: bw=4631KiB/s (4743kB/s), 80.8KiB/s-2446KiB/s (82.8kB/s-2504kB/s), io=4812KiB (4927kB), run=1001-1039msec 00:14:06.794 WRITE: bw=10.2MiB/s (10.7MB/s), 1971KiB/s-4092KiB/s (2018kB/s-4190kB/s), io=10.6MiB (11.1MB), run=1001-1039msec 00:14:06.794 00:14:06.794 Disk stats (read/write): 00:14:06.794 nvme0n1: ios=62/512, merge=0/0, ticks=916/187, in_queue=1103, util=85.27% 00:14:06.794 nvme0n2: ios=499/512, merge=0/0, ticks=675/291, in_queue=966, util=89.30% 00:14:06.794 nvme0n3: ios=93/512, merge=0/0, ticks=682/263, in_queue=945, util=92.41% 00:14:06.794 nvme0n4: ios=569/865, merge=0/0, ticks=509/398, in_queue=907, util=94.12% 00:14:06.794 22:42:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:06.794 [global] 00:14:06.794 thread=1 00:14:06.794 invalidate=1 00:14:06.794 rw=write 00:14:06.794 time_based=1 00:14:06.794 runtime=1 00:14:06.794 ioengine=libaio 00:14:06.794 direct=1 00:14:06.794 bs=4096 00:14:06.794 iodepth=128 00:14:06.794 norandommap=0 00:14:06.794 numjobs=1 00:14:06.794 00:14:06.794 verify_dump=1 00:14:06.794 verify_backlog=512 00:14:06.794 verify_state_save=0 00:14:06.794 do_verify=1 00:14:06.794 verify=crc32c-intel 00:14:06.794 [job0] 00:14:06.794 filename=/dev/nvme0n1 00:14:06.794 [job1] 00:14:06.794 filename=/dev/nvme0n2 00:14:06.794 [job2] 00:14:06.794 filename=/dev/nvme0n3 00:14:06.794 [job3] 00:14:06.794 filename=/dev/nvme0n4 00:14:06.794 Could not set queue depth (nvme0n1) 00:14:06.794 Could not set queue depth (nvme0n2) 00:14:06.794 Could not set queue depth (nvme0n3) 00:14:06.794 Could not set queue depth (nvme0n4) 00:14:07.060 job0: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:07.060 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:07.060 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:07.060 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:07.060 fio-3.35 00:14:07.060 Starting 4 threads 00:14:08.470 00:14:08.470 job0: (groupid=0, jobs=1): err= 0: pid=570864: Mon Sep 30 22:42:35 2024 00:14:08.470 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:14:08.470 slat (nsec): min=880, max=16078k, avg=74433.43, stdev=567897.32 00:14:08.470 clat (usec): min=1476, max=48216, avg=9565.99, stdev=5679.47 00:14:08.470 lat (usec): min=1480, max=48222, avg=9640.42, stdev=5725.81 00:14:08.470 clat percentiles (usec): 00:14:08.470 | 1.00th=[ 2180], 5.00th=[ 5014], 10.00th=[ 5669], 20.00th=[ 6259], 00:14:08.470 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 7701], 60.00th=[ 8717], 00:14:08.470 | 70.00th=[ 9634], 80.00th=[11863], 90.00th=[14877], 95.00th=[18220], 00:14:08.470 | 99.00th=[33817], 99.50th=[39584], 99.90th=[47449], 99.95th=[47973], 00:14:08.470 | 99.99th=[47973] 00:14:08.470 write: IOPS=6402, BW=25.0MiB/s (26.2MB/s)(25.1MiB/1002msec); 0 zone resets 00:14:08.470 slat (nsec): min=1547, max=10384k, avg=73212.93, stdev=476888.22 00:14:08.470 clat (usec): min=359, max=48214, avg=10662.75, stdev=8356.83 00:14:08.470 lat (usec): min=362, max=48226, avg=10735.96, stdev=8406.78 00:14:08.470 clat percentiles (usec): 00:14:08.470 | 1.00th=[ 742], 5.00th=[ 2114], 10.00th=[ 4359], 20.00th=[ 5866], 00:14:08.470 | 30.00th=[ 6128], 40.00th=[ 6390], 50.00th=[ 7046], 60.00th=[ 7832], 00:14:08.470 | 70.00th=[10028], 80.00th=[16319], 90.00th=[26084], 95.00th=[29492], 00:14:08.470 | 99.00th=[36963], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:14:08.470 | 99.99th=[47973] 00:14:08.470 bw ( KiB/s): min=20560, max=29744, per=27.17%, avg=25152.00, stdev=6494.07, samples=2 00:14:08.470 iops : min= 5140, max= 7436, avg=6288.00, stdev=1623.52, samples=2 00:14:08.470 lat (usec) : 500=0.07%, 750=0.49%, 1000=0.54% 00:14:08.470 lat (msec) : 2=1.87%, 4=2.83%, 10=64.56%, 20=19.21%, 50=10.41% 00:14:08.470 cpu : usr=5.00%, sys=6.89%, ctx=451, majf=0, minf=1 00:14:08.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:08.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.470 issued rwts: total=6144,6415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.470 job1: (groupid=0, jobs=1): err= 0: pid=570866: Mon Sep 30 22:42:35 2024 00:14:08.470 read: IOPS=4210, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1003msec) 00:14:08.470 slat (nsec): min=899, max=13062k, avg=93260.20, stdev=681662.17 00:14:08.470 clat (usec): min=1014, max=45816, avg=11862.59, stdev=6124.82 00:14:08.470 lat (usec): min=1021, max=45824, avg=11955.85, stdev=6181.53 00:14:08.470 clat percentiles (usec): 00:14:08.470 | 1.00th=[ 1434], 5.00th=[ 3195], 10.00th=[ 6783], 20.00th=[ 7439], 00:14:08.470 | 30.00th=[ 8094], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11731], 00:14:08.470 | 70.00th=[14484], 80.00th=[16319], 90.00th=[19268], 95.00th=[21627], 00:14:08.470 | 99.00th=[34341], 99.50th=[39060], 99.90th=[45876], 99.95th=[45876], 
00:14:08.470 | 99.99th=[45876] 00:14:08.470 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:14:08.470 slat (nsec): min=1525, max=9999.1k, avg=115949.52, stdev=618613.74 00:14:08.470 clat (usec): min=835, max=73441, avg=16759.62, stdev=14447.37 00:14:08.470 lat (usec): min=845, max=73450, avg=16875.57, stdev=14548.77 00:14:08.470 clat percentiles (usec): 00:14:08.470 | 1.00th=[ 1254], 5.00th=[ 4424], 10.00th=[ 5866], 20.00th=[ 7046], 00:14:08.470 | 30.00th=[ 7635], 40.00th=[ 9241], 50.00th=[10552], 60.00th=[13829], 00:14:08.470 | 70.00th=[20055], 80.00th=[25560], 90.00th=[32113], 95.00th=[53216], 00:14:08.470 | 99.00th=[68682], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:14:08.470 | 99.99th=[73925] 00:14:08.470 bw ( KiB/s): min=14952, max=21904, per=19.91%, avg=18428.00, stdev=4915.81, samples=2 00:14:08.470 iops : min= 3738, max= 5476, avg=4607.00, stdev=1228.95, samples=2 00:14:08.470 lat (usec) : 1000=0.03% 00:14:08.470 lat (msec) : 2=1.71%, 4=3.22%, 10=41.28%, 20=33.91%, 50=16.62% 00:14:08.470 lat (msec) : 100=3.23% 00:14:08.470 cpu : usr=3.19%, sys=4.99%, ctx=392, majf=0, minf=1 00:14:08.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:08.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.470 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.470 job2: (groupid=0, jobs=1): err= 0: pid=570885: Mon Sep 30 22:42:35 2024 00:14:08.470 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:14:08.470 slat (nsec): min=912, max=9513.0k, avg=70472.37, stdev=478423.08 00:14:08.470 clat (usec): min=4727, max=25721, avg=8585.34, stdev=2837.01 00:14:08.470 lat (usec): min=4735, max=25728, avg=8655.81, stdev=2880.46 00:14:08.470 clat percentiles (usec): 00:14:08.470 | 1.00th=[ 5080], 5.00th=[ 5932], 10.00th=[ 6718], 20.00th=[ 6980], 00:14:08.470 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7963], 00:14:08.470 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[12256], 95.00th=[14615], 00:14:08.470 | 99.00th=[20055], 99.50th=[23200], 99.90th=[25297], 99.95th=[25822], 00:14:08.470 | 99.99th=[25822] 00:14:08.470 write: IOPS=7094, BW=27.7MiB/s (29.1MB/s)(28.0MiB/1009msec); 0 zone resets 00:14:08.470 slat (nsec): min=1556, max=8506.1k, avg=69513.04, stdev=361220.11 00:14:08.470 clat (usec): min=2650, max=34374, avg=9785.42, stdev=5743.99 00:14:08.470 lat (usec): min=2679, max=34382, avg=9854.93, stdev=5782.85 00:14:08.470 clat percentiles (usec): 00:14:08.470 | 1.00th=[ 4424], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 6587], 00:14:08.470 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 7111], 60.00th=[ 8029], 00:14:08.470 | 70.00th=[ 8717], 80.00th=[13435], 90.00th=[16319], 95.00th=[24249], 00:14:08.470 | 99.00th=[32900], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:14:08.470 | 99.99th=[34341] 00:14:08.470 bw ( KiB/s): min=24576, max=31672, per=30.38%, avg=28124.00, stdev=5017.63, samples=2 00:14:08.470 iops : min= 6144, max= 7918, avg=7031.00, stdev=1254.41, samples=2 00:14:08.470 lat (msec) : 4=0.14%, 10=77.79%, 20=17.48%, 50=4.59% 00:14:08.470 cpu : usr=5.36%, sys=6.05%, ctx=885, majf=0, minf=1 00:14:08.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:08.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.471 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.471 issued rwts: total=6656,7158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.471 job3: (groupid=0, jobs=1): err= 0: pid=570892: Mon Sep 30 22:42:35 2024 00:14:08.471 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:14:08.471 slat (nsec): min=922, max=12632k, avg=84075.75, stdev=584016.18 00:14:08.471 clat (usec): min=1952, max=51185, avg=10751.88, stdev=5593.02 00:14:08.471 lat (usec): min=1961, max=51213, avg=10835.96, stdev=5642.29 00:14:08.471 clat percentiles (usec): 00:14:08.471 | 1.00th=[ 2507], 5.00th=[ 3982], 10.00th=[ 6325], 20.00th=[ 8094], 00:14:08.471 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:14:08.471 | 70.00th=[10683], 80.00th=[11469], 90.00th=[14484], 95.00th=[20317], 00:14:08.471 | 99.00th=[39060], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:08.471 | 99.99th=[51119] 00:14:08.471 write: IOPS=5141, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1005msec); 0 zone resets 00:14:08.471 slat (nsec): min=1575, max=11120k, avg=100209.81, stdev=628060.25 00:14:08.471 clat (usec): min=666, max=101153, avg=14031.36, stdev=14848.16 00:14:08.471 lat (usec): min=684, max=101161, avg=14131.57, stdev=14935.39 00:14:08.471 clat percentiles (usec): 00:14:08.471 | 1.00th=[ 1434], 5.00th=[ 2933], 10.00th=[ 4293], 20.00th=[ 6128], 00:14:08.471 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 10552], 00:14:08.471 | 70.00th=[ 14091], 80.00th=[ 18220], 90.00th=[ 23987], 95.00th=[ 43254], 00:14:08.471 | 99.00th=[ 93848], 99.50th=[ 99091], 99.90th=[101188], 99.95th=[101188], 00:14:08.471 | 99.99th=[101188] 00:14:08.471 bw ( KiB/s): min=12288, max=28672, per=22.13%, avg=20480.00, stdev=11585.24, samples=2 00:14:08.471 iops : min= 3072, max= 7168, avg=5120.00, stdev=2896.31, samples=2 00:14:08.471 lat (usec) : 750=0.02%, 1000=0.08% 00:14:08.471 lat (msec) : 2=1.14%, 4=6.05%, 10=45.88%, 20=36.48%, 50=8.44% 00:14:08.471 lat (msec) : 100=1.78%, 250=0.14% 00:14:08.471 cpu : usr=2.79%, sys=5.98%, ctx=462, majf=0, minf=2 00:14:08.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:08.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.471 issued rwts: total=5120,5167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.471 00:14:08.471 Run status group 0 (all jobs): 00:14:08.471 READ: bw=85.7MiB/s (89.9MB/s), 16.4MiB/s-25.8MiB/s (17.2MB/s-27.0MB/s), io=86.5MiB (90.7MB), run=1002-1009msec 00:14:08.471 WRITE: bw=90.4MiB/s (94.8MB/s), 17.9MiB/s-27.7MiB/s (18.8MB/s-29.1MB/s), io=91.2MiB (95.6MB), run=1002-1009msec 00:14:08.471 00:14:08.471 Disk stats (read/write): 00:14:08.471 nvme0n1: ios=5170/5207, merge=0/0, ticks=44947/52840, in_queue=97787, util=87.17% 00:14:08.471 nvme0n2: ios=3109/3183, merge=0/0, ticks=35576/59388, in_queue=94964, util=87.16% 00:14:08.471 nvme0n3: ios=5649/6143, merge=0/0, ticks=24716/26277, in_queue=50993, util=91.66% 00:14:08.471 nvme0n4: ios=4453/4608, merge=0/0, ticks=26105/30351, in_queue=56456, util=89.30% 00:14:08.471 22:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:08.471 [global] 00:14:08.471 thread=1 00:14:08.471 invalidate=1 
00:14:08.471 rw=randwrite 00:14:08.471 time_based=1 00:14:08.471 runtime=1 00:14:08.471 ioengine=libaio 00:14:08.471 direct=1 00:14:08.471 bs=4096 00:14:08.471 iodepth=128 00:14:08.471 norandommap=0 00:14:08.471 numjobs=1 00:14:08.471 00:14:08.471 verify_dump=1 00:14:08.471 verify_backlog=512 00:14:08.471 verify_state_save=0 00:14:08.471 do_verify=1 00:14:08.471 verify=crc32c-intel 00:14:08.471 [job0] 00:14:08.471 filename=/dev/nvme0n1 00:14:08.471 [job1] 00:14:08.471 filename=/dev/nvme0n2 00:14:08.471 [job2] 00:14:08.471 filename=/dev/nvme0n3 00:14:08.471 [job3] 00:14:08.471 filename=/dev/nvme0n4 00:14:08.471 Could not set queue depth (nvme0n1) 00:14:08.471 Could not set queue depth (nvme0n2) 00:14:08.471 Could not set queue depth (nvme0n3) 00:14:08.471 Could not set queue depth (nvme0n4) 00:14:08.737 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.737 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.737 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.737 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.737 fio-3.35 00:14:08.737 Starting 4 threads 00:14:10.149 00:14:10.149 job0: (groupid=0, jobs=1): err= 0: pid=571403: Mon Sep 30 22:42:36 2024 00:14:10.149 read: IOPS=9073, BW=35.4MiB/s (37.2MB/s)(35.6MiB/1004msec) 00:14:10.149 slat (nsec): min=927, max=6924.2k, avg=57440.30, stdev=419971.60 00:14:10.149 clat (usec): min=1329, max=14575, avg=7518.32, stdev=1838.78 00:14:10.149 lat (usec): min=2413, max=14987, avg=7575.76, stdev=1859.76 00:14:10.149 clat percentiles (usec): 00:14:10.149 | 1.00th=[ 3097], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6259], 00:14:10.149 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7177], 60.00th=[ 7635], 00:14:10.149 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[10028], 95.00th=[11076], 00:14:10.149 | 99.00th=[12387], 99.50th=[13042], 99.90th=[14091], 99.95th=[14484], 00:14:10.149 | 99.99th=[14615] 00:14:10.149 write: IOPS=9179, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1004msec); 0 zone resets 00:14:10.149 slat (nsec): min=1540, max=6198.9k, avg=46335.18, stdev=313025.84 00:14:10.149 clat (usec): min=1141, max=14577, avg=6381.71, stdev=1453.57 00:14:10.149 lat (usec): min=1151, max=14581, avg=6428.05, stdev=1471.81 00:14:10.149 clat percentiles (usec): 00:14:10.149 | 1.00th=[ 2704], 5.00th=[ 3752], 10.00th=[ 4178], 20.00th=[ 5145], 00:14:10.149 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6783], 00:14:10.150 | 70.00th=[ 6915], 80.00th=[ 7308], 90.00th=[ 7898], 95.00th=[ 8455], 00:14:10.150 | 99.00th=[10290], 99.50th=[10552], 99.90th=[12911], 99.95th=[13829], 00:14:10.150 | 99.99th=[14615] 00:14:10.150 bw ( KiB/s): min=36048, max=37680, per=34.57%, avg=36864.00, stdev=1154.00, samples=2 00:14:10.150 iops : min= 9012, max= 9420, avg=9216.00, stdev=288.50, samples=2 00:14:10.150 lat (msec) : 2=0.09%, 4=4.52%, 10=89.60%, 20=5.79% 00:14:10.150 cpu : usr=5.58%, sys=10.67%, ctx=722, majf=0, minf=1 00:14:10.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:10.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.150 issued rwts: total=9110,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.150 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:14:10.150 job1: (groupid=0, jobs=1): err= 0: pid=571417: Mon Sep 30 22:42:36 2024 00:14:10.150 read: IOPS=7596, BW=29.7MiB/s (31.1MB/s)(29.7MiB/1002msec) 00:14:10.150 slat (nsec): min=897, max=3542.5k, avg=67669.97, stdev=356165.54 00:14:10.150 clat (usec): min=1043, max=12901, avg=8542.11, stdev=969.67 00:14:10.150 lat (usec): min=3385, max=12928, avg=8609.78, stdev=1007.73 00:14:10.150 clat percentiles (usec): 00:14:10.150 | 1.00th=[ 5866], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 7898], 00:14:10.150 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:14:10.150 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[ 9896], 00:14:10.150 | 99.00th=[10683], 99.50th=[11207], 99.90th=[12518], 99.95th=[12518], 00:14:10.150 | 99.99th=[12911] 00:14:10.150 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:14:10.150 slat (nsec): min=1497, max=2909.4k, avg=59443.77, stdev=295339.10 00:14:10.150 clat (usec): min=4934, max=12525, avg=8040.63, stdev=882.95 00:14:10.150 lat (usec): min=4936, max=12557, avg=8100.07, stdev=919.55 00:14:10.150 clat percentiles (usec): 00:14:10.150 | 1.00th=[ 6063], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7242], 00:14:10.150 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8094], 00:14:10.150 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9503], 00:14:10.150 | 99.00th=[10421], 99.50th=[10945], 99.90th=[12125], 99.95th=[12387], 00:14:10.150 | 99.99th=[12518] 00:14:10.150 bw ( KiB/s): min=32768, max=32768, per=30.73%, avg=32768.00, stdev= 0.00, samples=1 00:14:10.150 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:14:10.150 lat (msec) : 2=0.01%, 4=0.27%, 10=96.75%, 20=2.97% 00:14:10.150 cpu : usr=4.10%, sys=5.29%, ctx=871, majf=0, minf=1 00:14:10.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:10.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.150 issued rwts: total=7612,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:10.150 job2: (groupid=0, jobs=1): err= 0: pid=571436: Mon Sep 30 22:42:36 2024 00:14:10.150 read: IOPS=5861, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1006msec) 00:14:10.150 slat (nsec): min=970, max=13307k, avg=77130.35, stdev=643162.57 00:14:10.150 clat (usec): min=2278, max=29533, avg=10780.54, stdev=3917.81 00:14:10.150 lat (usec): min=3215, max=35268, avg=10857.67, stdev=3961.26 00:14:10.150 clat percentiles (usec): 00:14:10.150 | 1.00th=[ 5735], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 8225], 00:14:10.150 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[10290], 00:14:10.150 | 70.00th=[11338], 80.00th=[13042], 90.00th=[16450], 95.00th=[18220], 00:14:10.150 | 99.00th=[26084], 99.50th=[26346], 99.90th=[26870], 99.95th=[26870], 00:14:10.150 | 99.99th=[29492] 00:14:10.150 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:14:10.150 slat (nsec): min=1594, max=14926k, avg=73072.06, stdev=661021.43 00:14:10.150 clat (usec): min=604, max=63619, avg=10427.43, stdev=7780.11 00:14:10.150 lat (usec): min=613, max=63629, avg=10500.50, stdev=7841.87 00:14:10.150 clat percentiles (usec): 00:14:10.150 | 1.00th=[ 1221], 5.00th=[ 3720], 10.00th=[ 5080], 20.00th=[ 6652], 00:14:10.150 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8717], 00:14:10.150 | 
70.00th=[10814], 80.00th=[13960], 90.00th=[16319], 95.00th=[19530], 00:14:10.150 | 99.00th=[52691], 99.50th=[59507], 99.90th=[63701], 99.95th=[63701], 00:14:10.150 | 99.99th=[63701] 00:14:10.150 bw ( KiB/s): min=24576, max=24576, per=23.04%, avg=24576.00, stdev= 0.00, samples=2 00:14:10.150 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:14:10.150 lat (usec) : 750=0.04%, 1000=0.07% 00:14:10.150 lat (msec) : 2=1.30%, 4=1.89%, 10=60.84%, 20=31.47%, 50=3.73% 00:14:10.150 lat (msec) : 100=0.67% 00:14:10.150 cpu : usr=5.17%, sys=6.77%, ctx=317, majf=0, minf=1 00:14:10.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:10.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.150 issued rwts: total=5897,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:10.150 job3: (groupid=0, jobs=1): err= 0: pid=571443: Mon Sep 30 22:42:36 2024 00:14:10.150 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:14:10.150 slat (nsec): min=943, max=12798k, avg=103983.10, stdev=802317.34 00:14:10.150 clat (usec): min=4927, max=33510, avg=13295.46, stdev=4504.84 00:14:10.150 lat (usec): min=4933, max=33535, avg=13399.44, stdev=4579.22 00:14:10.150 clat percentiles (usec): 00:14:10.150 | 1.00th=[ 6456], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[ 9372], 00:14:10.150 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[13435], 60.00th=[14222], 00:14:10.150 | 70.00th=[14746], 80.00th=[15533], 90.00th=[21365], 95.00th=[22152], 00:14:10.150 | 99.00th=[24511], 99.50th=[24773], 99.90th=[28443], 99.95th=[30016], 00:14:10.150 | 99.99th=[33424] 00:14:10.150 write: IOPS=3827, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1009msec); 0 zone resets 00:14:10.150 slat (nsec): min=1745, max=11554k, avg=156217.15, stdev=890390.02 00:14:10.150 clat (usec): min=1146, max=91584, avg=20659.78, stdev=20440.85 00:14:10.150 lat (usec): min=1155, max=91592, avg=20816.00, stdev=20577.55 00:14:10.150 clat percentiles (usec): 00:14:10.150 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 7242], 20.00th=[ 8455], 00:14:10.150 | 30.00th=[ 9110], 40.00th=[11207], 50.00th=[12911], 60.00th=[14746], 00:14:10.150 | 70.00th=[16909], 80.00th=[24773], 90.00th=[58983], 95.00th=[73925], 00:14:10.150 | 99.00th=[84411], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:14:10.150 | 99.99th=[91751] 00:14:10.150 bw ( KiB/s): min=12288, max=17584, per=14.00%, avg=14936.00, stdev=3744.84, samples=2 00:14:10.150 iops : min= 3072, max= 4396, avg=3734.00, stdev=936.21, samples=2 00:14:10.150 lat (msec) : 2=0.11%, 4=0.27%, 10=35.74%, 20=45.81%, 50=12.09% 00:14:10.150 lat (msec) : 100=5.99% 00:14:10.150 cpu : usr=2.78%, sys=4.37%, ctx=305, majf=0, minf=1 00:14:10.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:10.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.150 issued rwts: total=3584,3862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:10.150 00:14:10.150 Run status group 0 (all jobs): 00:14:10.150 READ: bw=101MiB/s (106MB/s), 13.9MiB/s-35.4MiB/s (14.5MB/s-37.2MB/s), io=102MiB (107MB), run=1002-1009msec 00:14:10.151 WRITE: bw=104MiB/s (109MB/s), 15.0MiB/s-35.9MiB/s (15.7MB/s-37.6MB/s), io=105MiB (110MB), run=1002-1009msec 
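That WRITE summary closes the fourth and final verified data pass. Between fio.sh@50 and fio.sh@53 the test sweeps write and randwrite at queue depths 1 and 128, always 4 KiB blocks with crc32c-intel verification across the four namespaces. The four wrapper invocations differ only in their -t and -d arguments, so the sweep is equivalent to the loop sketched here; the real fio.sh issues the four calls explicitly, and FIO merely abbreviates the full wrapper path from the log.

    # Equivalent form of the fio.sh@50-53 sweep; a sketch, not the
    # script's literal text.
    FIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper
    for qd in 1 128; do
        for rw in write randwrite; do
            "$FIO" -p nvmf -i 4096 -d "$qd" -t "$rw" -r 1 -v
        done
    done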
00:14:10.151 00:14:10.151 Disk stats (read/write): 00:14:10.151 nvme0n1: ios=7653/7680, merge=0/0, ticks=53730/46684, in_queue=100414, util=91.68% 00:14:10.151 nvme0n2: ios=6223/6656, merge=0/0, ticks=17418/16388, in_queue=33806, util=96.23% 00:14:10.151 nvme0n3: ios=4655/4999, merge=0/0, ticks=49113/51996, in_queue=101109, util=96.41% 00:14:10.151 nvme0n4: ios=3113/3209, merge=0/0, ticks=40250/61755, in_queue=102005, util=100.00% 00:14:10.151 22:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:10.151 22:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=571697 00:14:10.151 22:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:10.151 22:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:10.151 [global] 00:14:10.151 thread=1 00:14:10.151 invalidate=1 00:14:10.151 rw=read 00:14:10.151 time_based=1 00:14:10.151 runtime=10 00:14:10.151 ioengine=libaio 00:14:10.151 direct=1 00:14:10.151 bs=4096 00:14:10.151 iodepth=1 00:14:10.151 norandommap=1 00:14:10.151 numjobs=1 00:14:10.151 00:14:10.151 [job0] 00:14:10.151 filename=/dev/nvme0n1 00:14:10.151 [job1] 00:14:10.151 filename=/dev/nvme0n2 00:14:10.151 [job2] 00:14:10.151 filename=/dev/nvme0n3 00:14:10.151 [job3] 00:14:10.151 filename=/dev/nvme0n4 00:14:10.151 Could not set queue depth (nvme0n1) 00:14:10.151 Could not set queue depth (nvme0n2) 00:14:10.151 Could not set queue depth (nvme0n3) 00:14:10.151 Could not set queue depth (nvme0n4) 00:14:10.414 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:10.414 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:10.414 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:10.414 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:10.414 fio-3.35 00:14:10.414 Starting 4 threads 00:14:12.957 22:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:13.217 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11030528, buflen=4096 00:14:13.217 fio: pid=571968, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:13.217 22:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:13.217 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11427840, buflen=4096 00:14:13.217 fio: pid=571962, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:13.217 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.217 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:13.477 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.477 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:13.477 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=303104, buflen=4096 00:14:13.477 fio: pid=571928, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:13.738 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=17457152, buflen=4096 00:14:13.738 fio: pid=571942, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:13.738 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.738 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:13.738 00:14:13.738 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=571928: Mon Sep 30 22:42:40 2024 00:14:13.738 read: IOPS=25, BW=98.9KiB/s (101kB/s)(296KiB/2993msec) 00:14:13.738 slat (usec): min=24, max=1621, avg=46.93, stdev=184.24 00:14:13.738 clat (usec): min=1045, max=42059, avg=40091.65, stdev=8084.41 00:14:13.738 lat (usec): min=1074, max=42998, avg=40138.88, stdev=8088.78 00:14:13.738 clat percentiles (usec): 00:14:13.738 | 1.00th=[ 1045], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:13.738 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:14:13.738 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:13.738 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:13.738 | 99.99th=[42206] 00:14:13.738 bw ( KiB/s): min= 88, max= 104, per=0.79%, avg=99.20, stdev= 7.16, samples=5 00:14:13.738 iops : min= 22, max= 26, avg=24.80, stdev= 1.79, samples=5 00:14:13.738 lat (msec) : 2=4.00%, 50=94.67% 00:14:13.738 cpu : usr=0.10%, sys=0.00%, ctx=76, majf=0, minf=1 00:14:13.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.738 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.738 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.738 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=571942: Mon Sep 30 22:42:40 2024 00:14:13.738 read: IOPS=1355, BW=5421KiB/s (5551kB/s)(16.6MiB/3145msec) 00:14:13.738 slat (usec): min=6, max=22927, avg=38.36, stdev=458.42 00:14:13.738 clat (usec): min=148, max=5580, avg=687.86, stdev=142.89 00:14:13.738 lat (usec): min=156, max=23358, avg=726.22, stdev=474.07 00:14:13.738 clat percentiles (usec): 00:14:13.738 | 1.00th=[ 326], 5.00th=[ 441], 10.00th=[ 537], 20.00th=[ 603], 00:14:13.738 | 30.00th=[ 635], 40.00th=[ 660], 50.00th=[ 693], 60.00th=[ 734], 00:14:13.738 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 824], 95.00th=[ 848], 00:14:13.738 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 979], 99.95th=[ 1020], 00:14:13.738 | 99.99th=[ 5604] 00:14:13.738 bw ( KiB/s): min= 5040, max= 6032, per=43.78%, avg=5467.83, stdev=396.92, samples=6 00:14:13.738 iops : min= 1260, max= 1508, avg=1366.83, stdev=99.12, samples=6 00:14:13.738 lat (usec) : 250=0.14%, 500=7.18%, 750=57.57%, 1000=35.00% 00:14:13.738 lat (msec) : 2=0.07%, 10=0.02% 00:14:13.738 cpu : usr=1.53%, sys=3.53%, ctx=4269, majf=0, minf=2 
00:14:13.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.738 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.738 issued rwts: total=4263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.738 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=571962: Mon Sep 30 22:42:40 2024 00:14:13.738 read: IOPS=1004, BW=4017KiB/s (4114kB/s)(10.9MiB/2778msec) 00:14:13.738 slat (usec): min=6, max=20432, avg=38.17, stdev=479.04 00:14:13.738 clat (usec): min=262, max=41723, avg=943.41, stdev=787.96 00:14:13.738 lat (usec): min=288, max=41749, avg=981.59, stdev=923.68 00:14:13.738 clat percentiles (usec): 00:14:13.738 | 1.00th=[ 586], 5.00th=[ 676], 10.00th=[ 734], 20.00th=[ 783], 00:14:13.738 | 30.00th=[ 824], 40.00th=[ 881], 50.00th=[ 938], 60.00th=[ 988], 00:14:13.738 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1172], 00:14:13.738 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1336], 99.95th=[ 1483], 00:14:13.738 | 99.99th=[41681] 00:14:13.738 bw ( KiB/s): min= 3520, max= 4408, per=32.82%, avg=4099.20, stdev=354.74, samples=5 00:14:13.738 iops : min= 880, max= 1102, avg=1024.80, stdev=88.69, samples=5 00:14:13.738 lat (usec) : 500=0.36%, 750=12.76%, 1000=50.59% 00:14:13.738 lat (msec) : 2=36.22%, 50=0.04% 00:14:13.738 cpu : usr=1.04%, sys=3.06%, ctx=2793, majf=0, minf=1 00:14:13.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.738 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.738 issued rwts: total=2791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.738 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=571968: Mon Sep 30 22:42:40 2024 00:14:13.738 read: IOPS=1038, BW=4154KiB/s (4254kB/s)(10.5MiB/2593msec) 00:14:13.738 slat (nsec): min=6465, max=63018, avg=26844.37, stdev=5101.40 00:14:13.738 clat (usec): min=339, max=40883, avg=921.46, stdev=784.91 00:14:13.738 lat (usec): min=367, max=40910, avg=948.30, stdev=785.11 00:14:13.738 clat percentiles (usec): 00:14:13.738 | 1.00th=[ 482], 5.00th=[ 619], 10.00th=[ 709], 20.00th=[ 791], 00:14:13.738 | 30.00th=[ 848], 40.00th=[ 898], 50.00th=[ 930], 60.00th=[ 963], 00:14:13.738 | 70.00th=[ 996], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1123], 00:14:13.738 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1319], 99.95th=[ 1336], 00:14:13.738 | 99.99th=[40633] 00:14:13.738 bw ( KiB/s): min= 3992, max= 4504, per=33.63%, avg=4200.00, stdev=224.86, samples=5 00:14:13.738 iops : min= 998, max= 1126, avg=1050.00, stdev=56.21, samples=5 00:14:13.738 lat (usec) : 500=1.37%, 750=12.88%, 1000=57.61% 00:14:13.738 lat (msec) : 2=28.06%, 50=0.04% 00:14:13.738 cpu : usr=2.20%, sys=3.82%, ctx=2694, majf=0, minf=2 00:14:13.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.738 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.738 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.738 latency : target=0, window=0, percentile=100.00%, depth=1 
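The io_u "Operation not supported" errors above are the expected outcome of the hotplug test, not a defect: fio.sh@58 starts the 10-second read job in the background, fio.sh@59 records its pid (571697 here), and after a 3-second head start fio.sh@63-66 delete the RAID and malloc bdevs out from under the live connections. Condensed into one runnable shape, with RPC standing in for the full rpc.py path in the log:

    # Hotplug sequence condensed from the fio.sh@58-70 trace; RPC is our
    # shorthand for the full scripts/rpc.py path shown in the log, and the
    # $*_bdevs variables come from the fio.sh setup (not defined here).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    FIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper
    "$FIO" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3                                # let the reads get in flight
    "$RPC" bdev_raid_delete concat0        # fio.sh@63
    "$RPC" bdev_raid_delete raid0          # fio.sh@64
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        "$RPC" bdev_malloc_delete "$malloc_bdev"   # Malloc0, Malloc1, ... above
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?       # the trace records fio_status=4
    [ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'

The run totals just below bear this out: jobs planned for runtime=10 stop after roughly 2.6 to 3.1 seconds with err=95 once their backing bdevs disappear, so only 38.4 MiB is read in total.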
00:14:13.738 00:14:13.738 Run status group 0 (all jobs): 00:14:13.738 READ: bw=12.2MiB/s (12.8MB/s), 98.9KiB/s-5421KiB/s (101kB/s-5551kB/s), io=38.4MiB (40.2MB), run=2593-3145msec 00:14:13.738 00:14:13.738 Disk stats (read/write): 00:14:13.738 nvme0n1: ios=70/0, merge=0/0, ticks=2801/0, in_queue=2801, util=94.73% 00:14:13.738 nvme0n2: ios=4222/0, merge=0/0, ticks=2811/0, in_queue=2811, util=93.78% 00:14:13.738 nvme0n3: ios=2647/0, merge=0/0, ticks=2471/0, in_queue=2471, util=95.99% 00:14:13.738 nvme0n4: ios=2694/0, merge=0/0, ticks=2251/0, in_queue=2251, util=96.20% 00:14:13.738 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.738 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:14.001 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:14.001 22:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:14.262 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:14.262 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:14.523 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:14.523 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:14.523 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:14.523 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 571697 00:14:14.523 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:14.523 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug 
test: fio failed as expected' 00:14:14.783 nvmf hotplug test: fio failed as expected 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:14.783 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:14.784 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:14.784 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:14.784 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:14.784 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:14.784 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:14.784 rmmod nvme_tcp 00:14:15.045 rmmod nvme_fabrics 00:14:15.045 rmmod nvme_keyring 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 568099 ']' 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 568099 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 568099 ']' 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 568099 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 568099 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 568099' 00:14:15.045 killing process with pid 568099 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 568099 00:14:15.045 22:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 568099 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:15.045 22:42:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.045 22:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:17.593 00:14:17.593 real 0m29.657s 00:14:17.593 user 2m34.318s 00:14:17.593 sys 0m9.878s 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.593 ************************************ 00:14:17.593 END TEST nvmf_fio_target 00:14:17.593 ************************************ 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:17.593 ************************************ 00:14:17.593 START TEST nvmf_bdevio 00:14:17.593 ************************************ 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:17.593 * Looking for test storage... 
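The teardown traced above runs in a fixed order: delete each malloc bdev over RPC while fio is still running (the hotplug under test), reap fio and treat its failure as the expected outcome, disconnect the initiator-side controller, then delete the subsystem and the fio verify-state files. A minimal sketch of that sequence, assuming rpc.py on its default socket and using the names the log prints (fio_pid stands in for the literal 571697 above):

# Hedged sketch of the fio-target teardown order traced above; not the verbatim
# target/fio.sh source. Names (Malloc3..Malloc6, cnode1) are taken from the log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for malloc_bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete "$malloc_bdev"        # yank backing bdevs under live I/O
done
wait "$fio_pid" || fio_status=4                   # fio failing here is the point of the test
nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # drop the initiator-side controller
[ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state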
00:14:17.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:17.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.593 --rc genhtml_branch_coverage=1 00:14:17.593 --rc genhtml_function_coverage=1 00:14:17.593 --rc genhtml_legend=1 00:14:17.593 --rc geninfo_all_blocks=1 00:14:17.593 --rc geninfo_unexecuted_blocks=1 00:14:17.593 00:14:17.593 ' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:17.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.593 --rc genhtml_branch_coverage=1 00:14:17.593 --rc genhtml_function_coverage=1 00:14:17.593 --rc genhtml_legend=1 00:14:17.593 --rc geninfo_all_blocks=1 00:14:17.593 --rc geninfo_unexecuted_blocks=1 00:14:17.593 00:14:17.593 ' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:17.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.593 --rc genhtml_branch_coverage=1 00:14:17.593 --rc genhtml_function_coverage=1 00:14:17.593 --rc genhtml_legend=1 00:14:17.593 --rc geninfo_all_blocks=1 00:14:17.593 --rc geninfo_unexecuted_blocks=1 00:14:17.593 00:14:17.593 ' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:17.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.593 --rc genhtml_branch_coverage=1 00:14:17.593 --rc genhtml_function_coverage=1 00:14:17.593 --rc genhtml_legend=1 00:14:17.593 --rc geninfo_all_blocks=1 00:14:17.593 --rc geninfo_unexecuted_blocks=1 00:14:17.593 00:14:17.593 ' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.593 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:14:17.594 22:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:25.735 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:25.736 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:25.736 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:25.736 Found net devices under 0000:31:00.0: cvl_0_0 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:25.736 Found net devices under 0000:31:00.1: cvl_0_1 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.736 22:42:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:25.736 22:42:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:25.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:14:25.736 00:14:25.736 --- 10.0.0.2 ping statistics --- 00:14:25.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.736 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
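Before the bdevio target can listen on 10.0.0.2, the trace above splits the two e810 ports between a private network namespace (target side) and the root namespace (initiator side), opens the NVMe/TCP port in iptables, and ping-checks both directions. Condensed into a sketch, with the exact interface and address values the log prints:

# Condensed from the nvmf_tcp_init trace above; every command and value appears
# verbatim in the log.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# let NVMe/TCP in, tagged SPDK_NVMF so cleanup can grep the rule back out
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace -> root ns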
00:14:25.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:14:25.736 00:14:25.736 --- 10.0.0.1 ping statistics --- 00:14:25.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.736 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=577313 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 577313 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 577313 ']' 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.736 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.737 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.737 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.737 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:25.737 [2024-09-30 22:42:52.169929] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:14:25.737 [2024-09-30 22:42:52.169992] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.737 [2024-09-30 22:42:52.261725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.737 [2024-09-30 22:42:52.381155] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.737 [2024-09-30 22:42:52.381228] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.737 [2024-09-30 22:42:52.381239] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.737 [2024-09-30 22:42:52.381249] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.737 [2024-09-30 22:42:52.381257] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.737 [2024-09-30 22:42:52.381429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:14:25.737 [2024-09-30 22:42:52.381591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:14:25.737 [2024-09-30 22:42:52.381753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:14:25.737 [2024-09-30 22:42:52.381756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.997 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.997 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:14:25.997 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:25.997 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.997 22:42:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.258 [2024-09-30 22:42:53.049279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.258 Malloc0 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.258 22:42:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.258 [2024-09-30 22:42:53.114135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:14:26.258 { 00:14:26.258 "params": { 00:14:26.258 "name": "Nvme$subsystem", 00:14:26.258 "trtype": "$TEST_TRANSPORT", 00:14:26.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.258 "adrfam": "ipv4", 00:14:26.258 "trsvcid": "$NVMF_PORT", 00:14:26.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.258 "hdgst": ${hdgst:-false}, 00:14:26.258 "ddgst": ${ddgst:-false} 00:14:26.258 }, 00:14:26.258 "method": "bdev_nvme_attach_controller" 00:14:26.258 } 00:14:26.258 EOF 00:14:26.258 )") 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:14:26.258 22:42:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:14:26.258 "params": { 00:14:26.258 "name": "Nvme1", 00:14:26.258 "trtype": "tcp", 00:14:26.258 "traddr": "10.0.0.2", 00:14:26.258 "adrfam": "ipv4", 00:14:26.258 "trsvcid": "4420", 00:14:26.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.258 "hdgst": false, 00:14:26.258 "ddgst": false 00:14:26.258 }, 00:14:26.258 "method": "bdev_nvme_attach_controller" 00:14:26.258 }' 00:14:26.258 [2024-09-30 22:42:53.171181] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:14:26.258 [2024-09-30 22:42:53.171245] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577517 ] 00:14:26.258 [2024-09-30 22:42:53.255969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:26.519 [2024-09-30 22:42:53.355686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.519 [2024-09-30 22:42:53.355851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.519 [2024-09-30 22:42:53.355851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.519 I/O targets: 00:14:26.519 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:26.519 00:14:26.519 00:14:26.519 CUnit - A unit testing framework for C - Version 2.1-3 00:14:26.519 http://cunit.sourceforge.net/ 00:14:26.519 00:14:26.519 00:14:26.519 Suite: bdevio tests on: Nvme1n1 00:14:26.780 Test: blockdev write read block ...passed 00:14:26.780 Test: blockdev write zeroes read block ...passed 00:14:26.780 Test: blockdev write zeroes read no split ...passed 00:14:26.780 Test: blockdev write zeroes read split ...passed 00:14:26.780 Test: blockdev write zeroes read split partial ...passed 00:14:26.780 Test: blockdev reset ...[2024-09-30 22:42:53.698771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:26.780 [2024-09-30 22:42:53.698877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc861d0 (9): Bad file descriptor 00:14:26.780 [2024-09-30 22:42:53.794276] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:26.780 passed 00:14:26.780 Test: blockdev write read 8 blocks ...passed 00:14:26.780 Test: blockdev write read size > 128k ...passed 00:14:26.780 Test: blockdev write read invalid size ...passed 00:14:27.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:27.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:27.040 Test: blockdev write read max offset ...passed 00:14:27.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:27.040 Test: blockdev writev readv 8 blocks ...passed 00:14:27.040 Test: blockdev writev readv 30 x 1block ...passed 00:14:27.040 Test: blockdev writev readv block ...passed 00:14:27.040 Test: blockdev writev readv size > 128k ...passed 00:14:27.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:27.040 Test: blockdev comparev and writev ...[2024-09-30 22:42:53.966032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:27.040 [2024-09-30 22:42:53.966083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:27.040 [2024-09-30 22:42:53.966101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:27.040 [2024-09-30 22:42:53.966110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:27.040 [2024-09-30 22:42:53.966409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:27.040 [2024-09-30 22:42:53.966423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:27.040 [2024-09-30 22:42:53.966438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:27.040 [2024-09-30 22:42:53.966446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:27.040 [2024-09-30 22:42:53.966725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:27.041 [2024-09-30 22:42:53.966739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:27.041 [2024-09-30 22:42:53.966753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:27.041 [2024-09-30 22:42:53.966762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:27.041 [2024-09-30 22:42:53.967074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:27.041 [2024-09-30 22:42:53.967096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:27.041 [2024-09-30 22:42:53.967111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:27.041 [2024-09-30 22:42:53.967120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:27.041 passed 00:14:27.041 Test: blockdev nvme passthru rw ...passed 00:14:27.041 Test: blockdev nvme passthru vendor specific ...[2024-09-30 22:42:54.049336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.041 [2024-09-30 22:42:54.049355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:27.041 [2024-09-30 22:42:54.049467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.041 [2024-09-30 22:42:54.049478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:27.041 [2024-09-30 22:42:54.049590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.041 [2024-09-30 22:42:54.049601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:27.041 [2024-09-30 22:42:54.049701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.041 [2024-09-30 22:42:54.049714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:27.041 passed 00:14:27.301 Test: blockdev nvme admin passthru ...passed 00:14:27.301 Test: blockdev copy ...passed 00:14:27.301 00:14:27.301 Run Summary: Type Total Ran Passed Failed Inactive 00:14:27.301 suites 1 1 n/a 0 0 00:14:27.301 tests 23 23 23 0 0 00:14:27.301 asserts 152 152 152 0 n/a 00:14:27.301 00:14:27.301 Elapsed time = 1.177 seconds 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:27.301 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:27.301 rmmod nvme_tcp 00:14:27.301 rmmod nvme_fabrics 00:14:27.562 rmmod nvme_keyring 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
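The 23/23 bdevio pass above is driven by a five-call RPC bring-up, after which bdevio attaches over TCP using the JSON fragment the trace prints. Note that only the nvmf_tgt app itself was launched inside the namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0" line earlier); the RPC calls go straight over the Unix socket, which is reachable by filesystem path regardless of network namespace. A sketch of that bring-up, with $RPC as in the teardown sketch (the harness wraps the printed fragment into a full --json config; the wrapper is omitted here):

# Hedged sketch of the bring-up traced above; values are the ones the log shows.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevio then consumes the attach-controller config printed in the trace:
#   { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
#                 "adrfam": "ipv4", "trsvcid": "4420",
#                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
#                 "hostnqn": "nqn.2016-06.io.spdk:host1",
#                 "hdgst": false, "ddgst": false },
#     "method": "bdev_nvme_attach_controller" }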
00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 577313 ']' 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 577313 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 577313 ']' 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 577313 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 577313 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 577313' 00:14:27.562 killing process with pid 577313 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 577313 00:14:27.562 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 577313 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.821 22:42:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.733 22:42:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:29.733 00:14:29.733 real 0m12.491s 00:14:29.733 user 0m13.170s 00:14:29.733 sys 0m6.441s 00:14:29.733 22:42:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:29.733 22:42:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:29.733 ************************************ 00:14:29.733 END TEST nvmf_bdevio 00:14:29.733 ************************************ 00:14:29.733 22:42:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:29.733 00:14:29.733 real 5m9.022s 00:14:29.733 user 11m41.313s 00:14:29.733 sys 1m52.926s 00:14:29.733 
22:42:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:29.733 22:42:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:29.733 ************************************ 00:14:29.733 END TEST nvmf_target_core 00:14:29.733 ************************************ 00:14:29.994 22:42:56 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:29.994 22:42:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:29.994 22:42:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:29.994 22:42:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.994 ************************************ 00:14:29.994 START TEST nvmf_target_extra 00:14:29.994 ************************************ 00:14:29.994 22:42:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:29.994 * Looking for test storage... 00:14:29.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:29.994 22:42:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:29.994 22:42:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:14:29.994 22:42:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:30.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.256 --rc genhtml_branch_coverage=1 00:14:30.256 --rc genhtml_function_coverage=1 00:14:30.256 --rc genhtml_legend=1 00:14:30.256 --rc geninfo_all_blocks=1 00:14:30.256 --rc geninfo_unexecuted_blocks=1 00:14:30.256 00:14:30.256 ' 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:30.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.256 --rc genhtml_branch_coverage=1 00:14:30.256 --rc genhtml_function_coverage=1 00:14:30.256 --rc genhtml_legend=1 00:14:30.256 --rc geninfo_all_blocks=1 00:14:30.256 --rc geninfo_unexecuted_blocks=1 00:14:30.256 00:14:30.256 ' 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:30.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.256 --rc genhtml_branch_coverage=1 00:14:30.256 --rc genhtml_function_coverage=1 00:14:30.256 --rc genhtml_legend=1 00:14:30.256 --rc geninfo_all_blocks=1 00:14:30.256 --rc geninfo_unexecuted_blocks=1 00:14:30.256 00:14:30.256 ' 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:30.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.256 --rc genhtml_branch_coverage=1 00:14:30.256 --rc genhtml_function_coverage=1 00:14:30.256 --rc genhtml_legend=1 00:14:30.256 --rc geninfo_all_blocks=1 00:14:30.256 --rc geninfo_unexecuted_blocks=1 00:14:30.256 00:14:30.256 ' 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.256 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.257 ************************************ 00:14:30.257 START TEST nvmf_example 00:14:30.257 ************************************ 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:30.257 * Looking for test storage... 
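
The "[: : integer expression expected" complaint above is worth a note before the example test begins: it recurs on every source of test/nvmf/common.sh because an empty expansion reaches the numeric test at line 33, leaving bash to evaluate '[' '' -eq 1 ']'. The failed test merely selects the false branch (test status 2 behaves like false in an if), so the run continues, but the noise is avoidable. A defensive sketch of such a guard, assuming the flag may legitimately be unset (SPDK_SOME_FLAG and the branch body are illustrative stand-ins, not the actual variable and action at common.sh line 33):

# Default the expansion to 0 so an empty value never reaches the numeric
# test and '[' '' -eq 1 ']' can no longer be produced.
if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--placeholder-arg)  # stand-in action; the real branch differs
fi
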
00:14:30.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:14:30.257 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:30.519 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:30.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.520 --rc genhtml_branch_coverage=1 00:14:30.520 --rc genhtml_function_coverage=1 00:14:30.520 --rc genhtml_legend=1 00:14:30.520 --rc geninfo_all_blocks=1 00:14:30.520 --rc geninfo_unexecuted_blocks=1 00:14:30.520 00:14:30.520 ' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:30.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.520 --rc genhtml_branch_coverage=1 00:14:30.520 --rc genhtml_function_coverage=1 00:14:30.520 --rc genhtml_legend=1 00:14:30.520 --rc geninfo_all_blocks=1 00:14:30.520 --rc geninfo_unexecuted_blocks=1 00:14:30.520 00:14:30.520 ' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:30.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.520 --rc genhtml_branch_coverage=1 00:14:30.520 --rc genhtml_function_coverage=1 00:14:30.520 --rc genhtml_legend=1 00:14:30.520 --rc geninfo_all_blocks=1 00:14:30.520 --rc geninfo_unexecuted_blocks=1 00:14:30.520 00:14:30.520 ' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:30.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.520 --rc genhtml_branch_coverage=1 00:14:30.520 --rc genhtml_function_coverage=1 00:14:30.520 --rc genhtml_legend=1 00:14:30.520 --rc geninfo_all_blocks=1 00:14:30.520 --rc geninfo_unexecuted_blocks=1 00:14:30.520 00:14:30.520 ' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:30.520 22:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:30.520 22:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.520 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.521 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:30.521 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:30.521 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:14:30.521 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:38.666 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:14:38.667 22:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:38.667 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:38.667 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:38.667 Found net devices under 0000:31:00.0: cvl_0_0 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:38.667 Found net devices under 0000:31:00.1: cvl_0_1 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 
-- # is_hw=yes 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:38.667 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:38.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:14:38.667 00:14:38.667 --- 10.0.0.2 ping statistics --- 00:14:38.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.667 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:14:38.667 00:14:38.667 --- 10.0.0.1 ping statistics --- 00:14:38.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.667 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.667 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=582163 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 582163 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 582163 ']' 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.668 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:39.241 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.241 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:14:39.241 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:39.241 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.241 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.241 22:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:39.241 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:51.478 Initializing NVMe Controllers 00:14:51.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:51.478 Initialization complete. Launching workers. 00:14:51.478 ======================================================== 00:14:51.478 Latency(us) 00:14:51.478 Device Information : IOPS MiB/s Average min max 00:14:51.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18642.07 72.82 3434.57 629.06 15508.60 00:14:51.478 ======================================================== 00:14:51.478 Total : 18642.07 72.82 3434.57 629.06 15508.60 00:14:51.478 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.478 rmmod nvme_tcp 00:14:51.478 rmmod nvme_fabrics 00:14:51.478 rmmod nvme_keyring 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:14:51.478 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 582163 ']' 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 582163 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 582163 ']' 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 582163 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 582163 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
process_name=nvmf 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 582163' 00:14:51.479 killing process with pid 582163 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 582163 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 582163 00:14:51.479 nvmf threads initialize successfully 00:14:51.479 bdev subsystem init successfully 00:14:51.479 created a nvmf target service 00:14:51.479 create targets's poll groups done 00:14:51.479 all subsystems of target started 00:14:51.479 nvmf target is running 00:14:51.479 all subsystems of target stopped 00:14:51.479 destroy targets's poll groups done 00:14:51.479 destroyed the nvmf target service 00:14:51.479 bdev subsystem finish successfully 00:14:51.479 nvmf threads destroy successfully 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.479 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.739 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:51.739 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:51.739 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:51.739 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:51.739 00:14:51.739 real 0m21.612s 00:14:51.739 user 0m46.547s 00:14:51.739 sys 0m7.195s 00:14:51.739 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.739 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:51.739 ************************************ 00:14:51.739 END TEST nvmf_example 00:14:51.739 ************************************ 00:14:52.001 22:43:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:52.001 22:43:18 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:52.001 22:43:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:52.001 22:43:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.001 ************************************ 00:14:52.001 START TEST nvmf_filesystem 00:14:52.002 ************************************ 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:52.002 * Looking for test storage... 00:14:52.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:52.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.002 --rc genhtml_branch_coverage=1 00:14:52.002 --rc genhtml_function_coverage=1 00:14:52.002 --rc genhtml_legend=1 00:14:52.002 --rc geninfo_all_blocks=1 00:14:52.002 --rc geninfo_unexecuted_blocks=1 00:14:52.002 00:14:52.002 ' 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:52.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.002 --rc genhtml_branch_coverage=1 00:14:52.002 --rc genhtml_function_coverage=1 00:14:52.002 --rc genhtml_legend=1 00:14:52.002 --rc geninfo_all_blocks=1 00:14:52.002 --rc geninfo_unexecuted_blocks=1 00:14:52.002 00:14:52.002 ' 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:52.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.002 --rc genhtml_branch_coverage=1 00:14:52.002 --rc genhtml_function_coverage=1 00:14:52.002 --rc genhtml_legend=1 00:14:52.002 --rc geninfo_all_blocks=1 00:14:52.002 --rc geninfo_unexecuted_blocks=1 00:14:52.002 00:14:52.002 ' 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:52.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.002 --rc genhtml_branch_coverage=1 00:14:52.002 --rc genhtml_function_coverage=1 00:14:52.002 --rc genhtml_legend=1 00:14:52.002 --rc geninfo_all_blocks=1 00:14:52.002 --rc geninfo_unexecuted_blocks=1 00:14:52.002 00:14:52.002 ' 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:52.002 22:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:52.002 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:52.002 22:43:19 
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH=
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR=
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y
00:14:52.002 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR=
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH=
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR=
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH=
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR=
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX=
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:14:52.003 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
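Here applications.sh locates the repository root from its own path (dirname plus readlink -f, then trimming two directory levels), so the app arrays defined above point at build/bin no matter where the test was launched from. A minimal sketch of that self-locating pattern, with hypothetical names, before the config.h check below:

  # Resolve this script's directory, then derive the repo layout from it (sketch)
  _dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # e.g. .../spdk/test/common
  _root=${_dir%/test/common}                             # strip the known suffix -> repo root
  _app_dir=$_root/build/bin
  NVMF_APP=("$_app_dir/nvmf_tgt")                        # arrays, so callers can append argv later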
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:14:52.266 #define SPDK_CONFIG_H
00:14:52.266 #define SPDK_CONFIG_AIO_FSDEV 1
00:14:52.266 #define SPDK_CONFIG_APPS 1
00:14:52.266 #define SPDK_CONFIG_ARCH native
00:14:52.266 #undef SPDK_CONFIG_ASAN
00:14:52.266 #undef SPDK_CONFIG_AVAHI
00:14:52.266 #undef SPDK_CONFIG_CET
00:14:52.266 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:14:52.266 #define SPDK_CONFIG_COVERAGE 1
00:14:52.266 #define SPDK_CONFIG_CROSS_PREFIX
00:14:52.266 #undef SPDK_CONFIG_CRYPTO
00:14:52.266 #undef SPDK_CONFIG_CRYPTO_MLX5
00:14:52.266 #undef SPDK_CONFIG_CUSTOMOCF
00:14:52.266 #undef SPDK_CONFIG_DAOS
00:14:52.266 #define SPDK_CONFIG_DAOS_DIR
00:14:52.266 #define SPDK_CONFIG_DEBUG 1
00:14:52.266 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:14:52.266 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:14:52.266 #define SPDK_CONFIG_DPDK_INC_DIR
00:14:52.266 #define SPDK_CONFIG_DPDK_LIB_DIR
00:14:52.266 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:14:52.266 #undef SPDK_CONFIG_DPDK_UADK
00:14:52.266 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:14:52.266 #define SPDK_CONFIG_EXAMPLES 1
00:14:52.266 #undef SPDK_CONFIG_FC
00:14:52.266 #define SPDK_CONFIG_FC_PATH
00:14:52.266 #define SPDK_CONFIG_FIO_PLUGIN 1
00:14:52.266 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:14:52.266 #define SPDK_CONFIG_FSDEV 1
00:14:52.266 #undef SPDK_CONFIG_FUSE
00:14:52.266 #undef SPDK_CONFIG_FUZZER
00:14:52.266 #define SPDK_CONFIG_FUZZER_LIB
00:14:52.266 #undef SPDK_CONFIG_GOLANG
00:14:52.266 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:14:52.266 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:14:52.266 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:14:52.266 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:14:52.266 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:14:52.266 #undef SPDK_CONFIG_HAVE_LIBBSD
00:14:52.266 #undef SPDK_CONFIG_HAVE_LZ4
00:14:52.266 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:14:52.266 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:14:52.266 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:14:52.266 #define SPDK_CONFIG_IDXD 1
00:14:52.266 #define SPDK_CONFIG_IDXD_KERNEL 1
00:14:52.266 #undef SPDK_CONFIG_IPSEC_MB
00:14:52.266 #define SPDK_CONFIG_IPSEC_MB_DIR
00:14:52.266 #define SPDK_CONFIG_ISAL 1
00:14:52.266 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:14:52.266 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:14:52.266 #define SPDK_CONFIG_LIBDIR
00:14:52.266 #undef SPDK_CONFIG_LTO
00:14:52.266 #define SPDK_CONFIG_MAX_LCORES 128
00:14:52.266 #define SPDK_CONFIG_NVME_CUSE 1
00:14:52.266 #undef SPDK_CONFIG_OCF
00:14:52.266 #define SPDK_CONFIG_OCF_PATH
00:14:52.266 #define SPDK_CONFIG_OPENSSL_PATH
00:14:52.266 #undef SPDK_CONFIG_PGO_CAPTURE
00:14:52.266 #define SPDK_CONFIG_PGO_DIR
00:14:52.266 #undef SPDK_CONFIG_PGO_USE
00:14:52.266 #define SPDK_CONFIG_PREFIX /usr/local
00:14:52.266 #undef SPDK_CONFIG_RAID5F
00:14:52.266 #undef SPDK_CONFIG_RBD
00:14:52.266 #define SPDK_CONFIG_RDMA 1
00:14:52.266 #define SPDK_CONFIG_RDMA_PROV verbs
00:14:52.266 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:14:52.266 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:14:52.266 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:14:52.266 #define SPDK_CONFIG_SHARED 1
00:14:52.266 #undef SPDK_CONFIG_SMA
00:14:52.266 #define SPDK_CONFIG_TESTS 1
00:14:52.266 #undef SPDK_CONFIG_TSAN
00:14:52.266 #define SPDK_CONFIG_UBLK 1
00:14:52.266 #define SPDK_CONFIG_UBSAN 1
00:14:52.266 #undef SPDK_CONFIG_UNIT_TESTS
00:14:52.266 #undef SPDK_CONFIG_URING
00:14:52.266 #define SPDK_CONFIG_URING_PATH
00:14:52.266 #undef SPDK_CONFIG_URING_ZNS
00:14:52.266 #undef SPDK_CONFIG_USDT
00:14:52.266 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:14:52.266 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:14:52.266 #define SPDK_CONFIG_VFIO_USER 1
00:14:52.266 #define SPDK_CONFIG_VFIO_USER_DIR
00:14:52.266 #define SPDK_CONFIG_VHOST 1
00:14:52.266 #define SPDK_CONFIG_VIRTIO 1
00:14:52.266 #undef SPDK_CONFIG_VTUNE
00:14:52.266 #define SPDK_CONFIG_VTUNE_DIR
00:14:52.266 #define SPDK_CONFIG_WERROR 1
00:14:52.266 #define SPDK_CONFIG_WPDK_DIR
00:14:52.266 #undef SPDK_CONFIG_XNVME
00:14:52.266 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:14:52.266 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
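The applications.sh@23 match that just finished reads the generated include/spdk/config.h into the test and looks for `#define SPDK_CONFIG_DEBUG` with a plain substring glob; the backslash-escaped run in the trace is simply that pattern as xtrace prints it. A minimal sketch of the technique, assuming a hypothetical header path:

  # Enable extra debug hooks only when the build was configured with debug (sketch)
  config_h=./include/spdk/config.h   # hypothetical location
  if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build detected"
  fi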
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
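Note the PATH values paths/export.sh prints above: the same /opt/protoc, /opt/go, and /opt/golangci prefixes recur many times because the file is re-sourced by every nested test and prepends unconditionally. A guard like the following sketch (hypothetical, not what export.sh actually does) would keep repeated sourcing idempotent:

  # Prepend a directory to PATH only if it is not already present (sketch)
  path_prepend() {
    case ":$PATH:" in
      *":$1:"*) ;;                # already there, do nothing
      *) PATH="$1:$PATH" ;;
    esac
  }
  path_prepend /opt/go/1.21.1/bin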
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
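The long run of paired `# : 0` / `# export SPDK_TEST_...` lines above is consistent with bash's default-and-export idiom: the `:` no-op builtin evaluates a `${VAR:=default}` expansion, assigning the default only when the caller left the variable unset, and the export that follows publishes it to child processes. A minimal sketch of the pattern (hedged reconstruction, not autotest_common.sh verbatim):

  # Give a test flag a default of 0 unless autorun-spdk.conf already set it (sketch)
  : "${SPDK_TEST_URING:=0}"   # traces as ': 0' when the default is taken
  export SPDK_TEST_URING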
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:14:52.267 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']'
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
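The @202-@242 lines above build a leak-sanitizer suppression file (silencing known libfuse3 leaks) and activate it by pointing LSAN_OPTIONS at it, so later sanitized runs inherit the suppressions. A minimal reproduction of the pattern as the trace shows it:

  # Write an LSan suppression list and activate it for subsequent runs (sketch)
  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  echo "leak:libfuse3.so" >> "$supp"   # one 'leak:<pattern>' rule per line
  export LSAN_OPTIONS=suppressions=$supp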
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV=
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]]
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]=
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt=
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']'
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind=
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind=
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']'
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=()
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE=
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@"
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 584962 ]]
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 584962
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]]
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.QdTmfj
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]]
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]]
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QdTmfj/tests/target /tmp/spdk.QdTmfj
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=678309888
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4606119936
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122972856320
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356562432
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6383706112
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668250112
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847898112
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871314944
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23416832
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104
00:14:52.268 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677867520
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=413696
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935643136
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935655424
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n'
00:14:52.269 * Looking for test storage...
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}"
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}'
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122972856320
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size ))
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size ))
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]]
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]]
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]]
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8598298624
00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 ))
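set_test_storage above parses `df -T` into associative arrays keyed by mount point, then walks the candidate directories (the test dir, then the mktemp fallback under /tmp) until one sits on a filesystem with at least the requested ~2 GiB free. A condensed sketch of that selection under stated assumptions (directory candidates here are hypothetical, and the avail-to-bytes scaling assumes df's default 1K blocks):

  # Pick the first candidate dir whose filesystem has enough free space (sketch)
  requested=$((2 * 1024 * 1024 * 1024))
  declare -A avails
  while read -r _ _ _ _ avail mountpt; do
    avails[$mountpt]=$((avail * 1024))   # df reports 1K blocks by default
  done < <(df -T | grep -v Filesystem | awk '{print $1, $2, $3, $4, $5, $7}')
  for dir in "$PWD" /tmp; do
    mountpt=$(df "$dir" | awk '$1 !~ /Filesystem/{print $6}')
    (( avails[$mountpt] >= requested )) && { echo "using $dir"; break; }
  done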
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:52.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.269 --rc genhtml_branch_coverage=1 00:14:52.269 --rc genhtml_function_coverage=1 00:14:52.269 --rc genhtml_legend=1 00:14:52.269 --rc geninfo_all_blocks=1 00:14:52.269 --rc geninfo_unexecuted_blocks=1 00:14:52.269 00:14:52.269 ' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:52.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.269 --rc genhtml_branch_coverage=1 00:14:52.269 --rc genhtml_function_coverage=1 00:14:52.269 --rc genhtml_legend=1 00:14:52.269 --rc geninfo_all_blocks=1 00:14:52.269 --rc geninfo_unexecuted_blocks=1 00:14:52.269 00:14:52.269 ' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:52.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.269 --rc genhtml_branch_coverage=1 00:14:52.269 --rc genhtml_function_coverage=1 00:14:52.269 --rc genhtml_legend=1 00:14:52.269 --rc geninfo_all_blocks=1 00:14:52.269 --rc geninfo_unexecuted_blocks=1 00:14:52.269 00:14:52.269 ' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:52.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.269 --rc genhtml_branch_coverage=1 00:14:52.269 --rc genhtml_function_coverage=1 00:14:52.269 --rc genhtml_legend=1 00:14:52.269 --rc geninfo_all_blocks=1 00:14:52.269 --rc geninfo_unexecuted_blocks=1 00:14:52.269 00:14:52.269 ' 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.269 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.270 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.270 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.531 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.532 22:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:14:52.532 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:00.691 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:00.691 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.691 22:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:00.691 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:00.692 Found net devices under 0000:31:00.0: cvl_0_0 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:00.692 Found net devices under 0000:31:00.1: cvl_0_1 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.692 
22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:00.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:15:00.692 00:15:00.692 --- 10.0.0.2 ping statistics --- 00:15:00.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.692 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:00.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:15:00.692 00:15:00.692 --- 10.0.0.1 ping statistics --- 00:15:00.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.692 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.692 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:00.692 ************************************ 00:15:00.692 START TEST nvmf_filesystem_no_in_capsule 00:15:00.692 ************************************ 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=588956 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 588956 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 588956 ']' 00:15:00.692 22:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.692 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.692 [2024-09-30 22:43:27.103653] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:15:00.692 [2024-09-30 22:43:27.103714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.692 [2024-09-30 22:43:27.195389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.692 [2024-09-30 22:43:27.291513] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.692 [2024-09-30 22:43:27.291577] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.692 [2024-09-30 22:43:27.291586] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.692 [2024-09-30 22:43:27.291593] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.692 [2024-09-30 22:43:27.291600] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
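The nvmfappstart/waitforlisten sequence traced above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks until the target's RPC socket answers. A minimal sketch of that pattern, assuming the stock SPDK rpc.py client and the default /var/tmp/spdk.sock path (the real waitforlisten helper in autotest_common.sh adds more retries and error handling than this):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket; rpc_get_methods only succeeds once the app is listening.
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
kill -0 "$nvmfpid"   # fail fast if the target died during startup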
00:15:00.692 [2024-09-30 22:43:27.291766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.692 [2024-09-30 22:43:27.291946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.692 [2024-09-30 22:43:27.292053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.692 [2024-09-30 22:43:27.292055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.954 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.954 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:15:00.954 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:00.954 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:00.954 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.216 [2024-09-30 22:43:27.981504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.216 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.216 Malloc1 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.216 22:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.216 [2024-09-30 22:43:28.136178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:01.216 { 00:15:01.216 "name": "Malloc1", 00:15:01.216 "aliases": [ 00:15:01.216 "dc96abd7-19cd-4df7-b198-75cfd5424bbc" 00:15:01.216 ], 00:15:01.216 "product_name": "Malloc disk", 00:15:01.216 "block_size": 512, 00:15:01.216 "num_blocks": 1048576, 00:15:01.216 "uuid": "dc96abd7-19cd-4df7-b198-75cfd5424bbc", 00:15:01.216 "assigned_rate_limits": { 00:15:01.216 "rw_ios_per_sec": 0, 00:15:01.216 "rw_mbytes_per_sec": 0, 00:15:01.216 "r_mbytes_per_sec": 0, 00:15:01.216 "w_mbytes_per_sec": 0 00:15:01.216 }, 00:15:01.216 "claimed": true, 00:15:01.216 "claim_type": "exclusive_write", 00:15:01.216 "zoned": false, 00:15:01.216 "supported_io_types": { 00:15:01.216 "read": 
true, 00:15:01.216 "write": true, 00:15:01.216 "unmap": true, 00:15:01.216 "flush": true, 00:15:01.216 "reset": true, 00:15:01.216 "nvme_admin": false, 00:15:01.216 "nvme_io": false, 00:15:01.216 "nvme_io_md": false, 00:15:01.216 "write_zeroes": true, 00:15:01.216 "zcopy": true, 00:15:01.216 "get_zone_info": false, 00:15:01.216 "zone_management": false, 00:15:01.216 "zone_append": false, 00:15:01.216 "compare": false, 00:15:01.216 "compare_and_write": false, 00:15:01.216 "abort": true, 00:15:01.216 "seek_hole": false, 00:15:01.216 "seek_data": false, 00:15:01.216 "copy": true, 00:15:01.216 "nvme_iov_md": false 00:15:01.216 }, 00:15:01.216 "memory_domains": [ 00:15:01.216 { 00:15:01.216 "dma_device_id": "system", 00:15:01.216 "dma_device_type": 1 00:15:01.216 }, 00:15:01.216 { 00:15:01.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.216 "dma_device_type": 2 00:15:01.216 } 00:15:01.216 ], 00:15:01.216 "driver_specific": {} 00:15:01.216 } 00:15:01.216 ]' 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:15:01.216 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:01.477 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:15:01.477 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:15:01.477 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:15:01.477 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:01.477 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.862 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.862 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:15:02.862 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.862 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:02.862 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:15:04.773 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:04.773 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:04.773 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:04.773 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:04.773 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.773 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:15:04.773 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:04.773 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:05.034 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:05.294 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:05.865 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:06.806 ************************************ 00:15:06.806 START TEST filesystem_ext4 00:15:06.806 ************************************ 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
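The ext4 case that starts here, and the btrfs and xfs cases after it, all drive the same mount/IO/unmount cycle. Condensed from the target/filesystem.sh trace (device path and mountpoint come from the parted/partprobe setup just above; btrfs and xfs substitute -f for ext4's -F):

mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                         # the nvmf_tgt process must survive the cycle
lsblk -l -o NAME | grep -q -w nvme0n1p1    # namespace and partition still visible afterwards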
00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:15:06.806 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:06.807 mke2fs 1.47.0 (5-Feb-2023) 00:15:06.807 Discarding device blocks: 0/522240 done 00:15:06.807 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:06.807 Filesystem UUID: 938f3ac0-e284-45ba-ac0a-8a55fc2d48ac 00:15:06.807 Superblock backups stored on blocks: 00:15:06.807 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:06.807 00:15:06.807 Allocating group tables: 0/64 done 00:15:06.807 Writing inode tables: 0/64 done 00:15:10.108 Creating journal (8192 blocks): done 00:15:10.108 Writing superblocks and filesystem accounting information: 0/64 done 00:15:10.108 00:15:10.108 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:15:10.109 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:15.534 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:15.534 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:15.534 
22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 588956 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:15.534 00:15:15.534 real 0m8.439s 00:15:15.534 user 0m0.031s 00:15:15.534 sys 0m0.079s 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:15.534 ************************************ 00:15:15.534 END TEST filesystem_ext4 00:15:15.534 ************************************ 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.534 ************************************ 00:15:15.534 START TEST filesystem_btrfs 00:15:15.534 ************************************ 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:15:15.534 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:15:15.534 22:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:15:15.535 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:15:15.535 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:15.535 btrfs-progs v6.8.1 00:15:15.535 See https://btrfs.readthedocs.io for more information. 00:15:15.535 00:15:15.535 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:15.535 NOTE: several default settings have changed in version 5.15, please make sure 00:15:15.535 this does not affect your deployments: 00:15:15.535 - DUP for metadata (-m dup) 00:15:15.535 - enabled no-holes (-O no-holes) 00:15:15.535 - enabled free-space-tree (-R free-space-tree) 00:15:15.535 00:15:15.535 Label: (null) 00:15:15.535 UUID: c9fbcfd8-c590-4dbd-85ab-b6fa01077cf0 00:15:15.535 Node size: 16384 00:15:15.535 Sector size: 4096 (CPU page size: 4096) 00:15:15.535 Filesystem size: 510.00MiB 00:15:15.535 Block group profiles: 00:15:15.535 Data: single 8.00MiB 00:15:15.535 Metadata: DUP 32.00MiB 00:15:15.535 System: DUP 8.00MiB 00:15:15.535 SSD detected: yes 00:15:15.535 Zoned device: no 00:15:15.535 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:15.535 Checksum: crc32c 00:15:15.535 Number of devices: 1 00:15:15.535 Devices: 00:15:15.535 ID SIZE PATH 00:15:15.535 1 510.00MiB /dev/nvme0n1p1 00:15:15.535 00:15:15.535 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:15:15.535 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 588956 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:15.796 
22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:15.796 00:15:15.796 real 0m0.561s 00:15:15.796 user 0m0.030s 00:15:15.796 sys 0m0.119s 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:15.796 ************************************ 00:15:15.796 END TEST filesystem_btrfs 00:15:15.796 ************************************ 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.796 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.796 ************************************ 00:15:15.796 START TEST filesystem_xfs 00:15:15.797 ************************************ 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:15:15.797 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:16.057 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:16.057 = sectsz=512 attr=2, projid32bit=1 00:15:16.057 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:16.057 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:16.057 data 
= bsize=4096 blocks=130560, imaxpct=25 00:15:16.057 = sunit=0 swidth=0 blks 00:15:16.057 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:16.057 log =internal log bsize=4096 blocks=16384, version=2 00:15:16.057 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:16.057 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:16.999 Discarding blocks...Done. 00:15:16.999 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:15:16.999 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 588956 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:19.543 00:15:19.543 real 0m3.712s 00:15:19.543 user 0m0.032s 00:15:19.543 sys 0m0.075s 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:19.543 ************************************ 00:15:19.543 END TEST filesystem_xfs 00:15:19.543 ************************************ 00:15:19.543 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:20.114 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:20.114 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.114 22:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.114 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:15:20.114 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:20.114 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.114 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 588956 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 588956 ']' 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 588956 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 588956 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 588956' 00:15:20.115 killing process with pid 588956 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 588956 00:15:20.115 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 588956 00:15:20.375 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:20.375 00:15:20.375 real 0m20.286s 00:15:20.375 user 1m20.017s 00:15:20.375 sys 0m1.484s 00:15:20.376 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.376 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:20.376 ************************************ 00:15:20.376 END TEST nvmf_filesystem_no_in_capsule 00:15:20.376 ************************************ 00:15:20.376 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:20.376 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:20.376 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.376 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.636 ************************************ 00:15:20.636 START TEST nvmf_filesystem_in_capsule 00:15:20.636 ************************************ 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=593219 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 593219 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 593219 ']' 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
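The xtrace entries that follow bring up the in-capsule target end to end. As a minimal standalone sketch, assuming the stock scripts/rpc.py client, the RPC socket path echoed above, and relative paths from an SPDK checkout (the harness itself uses absolute workspace paths and its waitforlisten helper in place of the polling loop shown here):

    # launch the target in the test netns, then wait for its RPC socket to answer
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.5; done
    # TCP transport with a 4096-byte in-capsule data size (the value under test)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MiB RAM-backed bdev (512-byte blocks), exported through cnode1 on 10.0.0.2:4420
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host side then attaches with nvme connect against the same NQN and listener, as the trace shows a few entries further down; the serial SPDKISFASTANDAWESOME is what the waitforserial checks grep for in lsblk output.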
00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.636 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:20.636 [2024-09-30 22:43:47.467172] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:15:20.636 [2024-09-30 22:43:47.467220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.636 [2024-09-30 22:43:47.548789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.636 [2024-09-30 22:43:47.604803] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.636 [2024-09-30 22:43:47.604839] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.636 [2024-09-30 22:43:47.604845] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.636 [2024-09-30 22:43:47.604849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.636 [2024-09-30 22:43:47.604853] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.636 [2024-09-30 22:43:47.604940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.636 [2024-09-30 22:43:47.605030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.636 [2024-09-30 22:43:47.605182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.636 [2024-09-30 22:43:47.605185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:21.575 [2024-09-30 22:43:48.308425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.575 22:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:21.575 Malloc1 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:21.575 [2024-09-30 22:43:48.432770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:15:21.575 22:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:21.575 { 00:15:21.575 "name": "Malloc1", 00:15:21.575 "aliases": [ 00:15:21.575 "5d9a47ec-cfff-4627-b26b-bc0e9e208720" 00:15:21.575 ], 00:15:21.575 "product_name": "Malloc disk", 00:15:21.575 "block_size": 512, 00:15:21.575 "num_blocks": 1048576, 00:15:21.575 "uuid": "5d9a47ec-cfff-4627-b26b-bc0e9e208720", 00:15:21.575 "assigned_rate_limits": { 00:15:21.575 "rw_ios_per_sec": 0, 00:15:21.575 "rw_mbytes_per_sec": 0, 00:15:21.575 "r_mbytes_per_sec": 0, 00:15:21.575 "w_mbytes_per_sec": 0 00:15:21.575 }, 00:15:21.575 "claimed": true, 00:15:21.575 "claim_type": "exclusive_write", 00:15:21.575 "zoned": false, 00:15:21.575 "supported_io_types": { 00:15:21.575 "read": true, 00:15:21.575 "write": true, 00:15:21.575 "unmap": true, 00:15:21.575 "flush": true, 00:15:21.575 "reset": true, 00:15:21.575 "nvme_admin": false, 00:15:21.575 "nvme_io": false, 00:15:21.575 "nvme_io_md": false, 00:15:21.575 "write_zeroes": true, 00:15:21.575 "zcopy": true, 00:15:21.575 "get_zone_info": false, 00:15:21.575 "zone_management": false, 00:15:21.575 "zone_append": false, 00:15:21.575 "compare": false, 00:15:21.575 "compare_and_write": false, 00:15:21.575 "abort": true, 00:15:21.575 "seek_hole": false, 00:15:21.575 "seek_data": false, 00:15:21.575 "copy": true, 00:15:21.575 "nvme_iov_md": false 00:15:21.575 }, 00:15:21.575 "memory_domains": [ 00:15:21.575 { 00:15:21.575 "dma_device_id": "system", 00:15:21.575 "dma_device_type": 1 00:15:21.575 }, 00:15:21.575 { 00:15:21.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.575 "dma_device_type": 2 00:15:21.575 } 00:15:21.575 ], 00:15:21.575 "driver_specific": {} 00:15:21.575 } 00:15:21.575 ]' 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:21.575 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:23.518 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.518 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:15:23.518 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.518 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:23.518 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:15:25.430 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:25.431 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:25.691 22:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:26.261 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:27.204 ************************************ 00:15:27.204 START TEST filesystem_in_capsule_ext4 00:15:27.204 ************************************ 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:15:27.204 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:27.204 mke2fs 1.47.0 (5-Feb-2023) 00:15:27.204 Discarding device blocks: 0/522240 done 00:15:27.204 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:27.204 Filesystem UUID: 83a1ef3f-977e-4f7b-b1e2-95e8f88224f2 00:15:27.204 Superblock backups stored on blocks: 00:15:27.204 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:27.204 00:15:27.204 Allocating group tables: 0/64 done 00:15:27.204 Writing inode tables: 
0/64 done 00:15:27.465 Creating journal (8192 blocks): done 00:15:29.679 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:15:29.679 00:15:29.679 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:15:29.679 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 593219 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:36.257 00:15:36.257 real 0m8.058s 00:15:36.257 user 0m0.022s 00:15:36.257 sys 0m0.084s 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:36.257 ************************************ 00:15:36.257 END TEST filesystem_in_capsule_ext4 00:15:36.257 ************************************ 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.257 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.258 
************************************ 00:15:36.258 START TEST filesystem_in_capsule_btrfs 00:15:36.258 ************************************ 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:36.258 btrfs-progs v6.8.1 00:15:36.258 See https://btrfs.readthedocs.io for more information. 00:15:36.258 00:15:36.258 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:36.258 NOTE: several default settings have changed in version 5.15, please make sure 00:15:36.258 this does not affect your deployments: 00:15:36.258 - DUP for metadata (-m dup) 00:15:36.258 - enabled no-holes (-O no-holes) 00:15:36.258 - enabled free-space-tree (-R free-space-tree) 00:15:36.258 00:15:36.258 Label: (null) 00:15:36.258 UUID: 4469a85b-a909-4c0d-bad5-a74be1f6ddab 00:15:36.258 Node size: 16384 00:15:36.258 Sector size: 4096 (CPU page size: 4096) 00:15:36.258 Filesystem size: 510.00MiB 00:15:36.258 Block group profiles: 00:15:36.258 Data: single 8.00MiB 00:15:36.258 Metadata: DUP 32.00MiB 00:15:36.258 System: DUP 8.00MiB 00:15:36.258 SSD detected: yes 00:15:36.258 Zoned device: no 00:15:36.258 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:36.258 Checksum: crc32c 00:15:36.258 Number of devices: 1 00:15:36.258 Devices: 00:15:36.258 ID SIZE PATH 00:15:36.258 1 510.00MiB /dev/nvme0n1p1 00:15:36.258 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:15:36.258 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 593219 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:36.258 00:15:36.258 real 0m0.923s 00:15:36.258 user 0m0.039s 00:15:36.258 sys 0m0.112s 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:15:36.258 ************************************ 00:15:36.258 END TEST filesystem_in_capsule_btrfs 00:15:36.258 ************************************ 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.258 ************************************ 00:15:36.258 START TEST filesystem_in_capsule_xfs 00:15:36.258 ************************************ 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:15:36.258 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:36.258 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:36.258 = sectsz=512 attr=2, projid32bit=1 00:15:36.258 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:36.258 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:36.258 data = bsize=4096 blocks=130560, imaxpct=25 00:15:36.258 = sunit=0 swidth=0 blks 00:15:36.258 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:36.258 log =internal log bsize=4096 blocks=16384, version=2 00:15:36.258 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:36.258 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:37.200 Discarding blocks...Done. 
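With the xfs image written, the trace below (filesystem.sh@23 through @43) mounts it, round-trips a file, unmounts, and checks that the target process and both block devices survived. Condensed into one function as a sketch: the name and arguments come from the run_test line above, variable names mirror the xtrace, and make_filesystem's retry loop is collapsed to its success path:

    nvmf_filesystem_create() {
        local fstype=$1 nvme_name=$2 force
        [ "$fstype" = ext4 ] && force=-F || force=-f   # mkfs.ext4 takes -F, btrfs/xfs take -f
        mkfs.$fstype $force /dev/${nvme_name}p1
        mount /dev/${nvme_name}p1 /mnt/device
        touch /mnt/device/aaa && sync
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 $nvmfpid                               # nvmf_tgt must still be running
        lsblk -l -o NAME | grep -q -w ${nvme_name}     # namespace still exposed
        lsblk -l -o NAME | grep -q -w ${nvme_name}p1   # partition still present
    }

The real-time figures reported at the end of each run come from this whole sequence, which is why mkfs-heavy filesystems (ext4's journal creation above) dominate the per-test wall clock.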
00:15:37.200 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:15:37.200 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 593219 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:39.742 00:15:39.742 real 0m3.487s 00:15:39.742 user 0m0.033s 00:15:39.742 sys 0m0.074s 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 ************************************ 00:15:39.742 END TEST filesystem_in_capsule_xfs 00:15:39.742 ************************************ 00:15:39.742 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 593219 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 593219 ']' 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 593219 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 593219 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 593219' 00:15:40.312 killing process with pid 593219 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 593219 00:15:40.312 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 593219 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:40.572 00:15:40.572 real 0m20.123s 00:15:40.572 user 1m19.599s 00:15:40.572 sys 0m1.380s 00:15:40.572 22:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:40.572 ************************************ 00:15:40.572 END TEST nvmf_filesystem_in_capsule 00:15:40.572 ************************************ 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.572 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.572 rmmod nvme_tcp 00:15:40.833 rmmod nvme_fabrics 00:15:40.833 rmmod nvme_keyring 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.833 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.748 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:42.748 00:15:42.748 real 0m50.924s 00:15:42.748 user 2m42.019s 00:15:42.748 sys 0m8.911s 00:15:42.748 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.748 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:42.748 
************************************ 00:15:42.748 END TEST nvmf_filesystem 00:15:42.748 ************************************ 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:43.009 ************************************ 00:15:43.009 START TEST nvmf_target_discovery 00:15:43.009 ************************************ 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:43.009 * Looking for test storage... 00:15:43.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.009 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:43.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.009 --rc genhtml_branch_coverage=1 00:15:43.009 --rc genhtml_function_coverage=1 00:15:43.009 --rc genhtml_legend=1 00:15:43.009 --rc geninfo_all_blocks=1 00:15:43.009 --rc geninfo_unexecuted_blocks=1 00:15:43.009 00:15:43.009 ' 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:43.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.009 --rc genhtml_branch_coverage=1 00:15:43.009 --rc genhtml_function_coverage=1 00:15:43.009 --rc genhtml_legend=1 00:15:43.009 --rc geninfo_all_blocks=1 00:15:43.009 --rc geninfo_unexecuted_blocks=1 00:15:43.009 00:15:43.009 ' 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:43.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.009 --rc genhtml_branch_coverage=1 00:15:43.009 --rc genhtml_function_coverage=1 00:15:43.009 --rc genhtml_legend=1 00:15:43.009 --rc geninfo_all_blocks=1 00:15:43.009 --rc geninfo_unexecuted_blocks=1 00:15:43.009 00:15:43.009 ' 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:43.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.009 --rc genhtml_branch_coverage=1 00:15:43.009 --rc genhtml_function_coverage=1 00:15:43.009 --rc genhtml_legend=1 00:15:43.009 --rc geninfo_all_blocks=1 00:15:43.009 --rc geninfo_unexecuted_blocks=1 00:15:43.009 00:15:43.009 ' 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.009 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:43.270 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.271 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.271 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.271 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:15:43.271 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:15:43.271 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:43.271 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:51.424 22:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:51.424 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice 
== unbound ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:51.424 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:15:51.424 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:51.425 Found net devices under 0000:31:00.0: cvl_0_0 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:51.425 22:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:51.425 Found net devices under 0000:31:00.1: cvl_0_1 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:51.425 22:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:51.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:15:51.425 00:15:51.425 --- 10.0.0.2 ping statistics --- 00:15:51.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.425 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:15:51.425 00:15:51.425 --- 10.0.0.1 ping statistics --- 00:15:51.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.425 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=601531 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 601531 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 601531 ']' 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.425 [2024-09-30 22:44:17.800404] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:15:51.425 [2024-09-30 22:44:17.800473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.425 [2024-09-30 22:44:17.890498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.425 [2024-09-30 22:44:17.987054] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.425 [2024-09-30 22:44:17.987114] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.425 [2024-09-30 22:44:17.987124] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.425 [2024-09-30 22:44:17.987131] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.425 [2024-09-30 22:44:17.987138] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
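Before any rpc_cmd call can succeed, nvmfappstart has to launch nvmf_tgt inside the target namespace and block until its RPC socket answers. A minimal sketch of that step, assuming the default /var/tmp/spdk.sock socket and SPDK's stock rpc.py; the real waitforlisten in autotest_common.sh retries with the bounded max_retries=100 seen above rather than looping forever:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready to serve requests.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done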
00:15:51.425 [2024-09-30 22:44:17.987305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.425 [2024-09-30 22:44:17.987405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.425 [2024-09-30 22:44:17.987531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.425 [2024-09-30 22:44:17.987531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 [2024-09-30 22:44:18.681048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.687 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.949 Null1 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.949 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.949 [2024-09-30 22:44:18.741665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.949 Null2 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.949 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:15:51.950 Null3 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 Null4 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.950 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:52.212 00:15:52.212 Discovery Log Number of Records 6, Generation counter 6 00:15:52.212 =====Discovery Log Entry 0====== 00:15:52.212 trtype: tcp 00:15:52.212 adrfam: ipv4 00:15:52.212 subtype: current discovery subsystem 00:15:52.212 treq: not required 00:15:52.212 portid: 0 00:15:52.212 trsvcid: 4420 00:15:52.212 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:52.212 traddr: 10.0.0.2 00:15:52.212 eflags: explicit discovery connections, duplicate discovery information 00:15:52.212 sectype: none 00:15:52.212 =====Discovery Log Entry 1====== 00:15:52.212 trtype: tcp 00:15:52.212 adrfam: ipv4 00:15:52.212 subtype: nvme subsystem 00:15:52.212 treq: not required 00:15:52.212 portid: 0 00:15:52.212 trsvcid: 4420 00:15:52.212 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:52.212 traddr: 10.0.0.2 00:15:52.212 eflags: none 00:15:52.212 sectype: none 00:15:52.212 =====Discovery Log Entry 2====== 00:15:52.212 trtype: tcp 00:15:52.212 adrfam: ipv4 00:15:52.212 subtype: nvme subsystem 00:15:52.212 treq: not required 00:15:52.212 portid: 0 00:15:52.212 trsvcid: 4420 00:15:52.212 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:52.212 traddr: 10.0.0.2 00:15:52.212 eflags: none 00:15:52.212 sectype: none 00:15:52.212 =====Discovery Log Entry 3====== 00:15:52.212 trtype: tcp 00:15:52.212 adrfam: ipv4 00:15:52.212 subtype: nvme subsystem 00:15:52.212 treq: not required 00:15:52.212 portid: 0 00:15:52.212 trsvcid: 4420 00:15:52.212 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:52.212 traddr: 10.0.0.2 00:15:52.212 eflags: none 00:15:52.212 sectype: none 00:15:52.212 =====Discovery Log Entry 4====== 00:15:52.212 trtype: tcp 00:15:52.212 adrfam: ipv4 00:15:52.212 subtype: nvme subsystem 
00:15:52.212 treq: not required 00:15:52.212 portid: 0 00:15:52.212 trsvcid: 4420 00:15:52.212 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:52.212 traddr: 10.0.0.2 00:15:52.212 eflags: none 00:15:52.212 sectype: none 00:15:52.212 =====Discovery Log Entry 5====== 00:15:52.212 trtype: tcp 00:15:52.212 adrfam: ipv4 00:15:52.212 subtype: discovery subsystem referral 00:15:52.212 treq: not required 00:15:52.212 portid: 0 00:15:52.212 trsvcid: 4430 00:15:52.212 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:52.212 traddr: 10.0.0.2 00:15:52.212 eflags: none 00:15:52.212 sectype: none 00:15:52.212 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:52.212 Perform nvmf subsystem discovery via RPC 00:15:52.212 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:52.212 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.212 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.212 [ 00:15:52.212 { 00:15:52.212 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:52.212 "subtype": "Discovery", 00:15:52.212 "listen_addresses": [ 00:15:52.212 { 00:15:52.212 "trtype": "TCP", 00:15:52.212 "adrfam": "IPv4", 00:15:52.212 "traddr": "10.0.0.2", 00:15:52.212 "trsvcid": "4420" 00:15:52.212 } 00:15:52.212 ], 00:15:52.212 "allow_any_host": true, 00:15:52.212 "hosts": [] 00:15:52.212 }, 00:15:52.212 { 00:15:52.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.212 "subtype": "NVMe", 00:15:52.212 "listen_addresses": [ 00:15:52.212 { 00:15:52.212 "trtype": "TCP", 00:15:52.212 "adrfam": "IPv4", 00:15:52.212 "traddr": "10.0.0.2", 00:15:52.212 "trsvcid": "4420" 00:15:52.212 } 00:15:52.212 ], 00:15:52.212 "allow_any_host": true, 00:15:52.212 "hosts": [], 00:15:52.212 "serial_number": "SPDK00000000000001", 00:15:52.212 "model_number": "SPDK bdev Controller", 00:15:52.212 "max_namespaces": 32, 00:15:52.212 "min_cntlid": 1, 00:15:52.212 "max_cntlid": 65519, 00:15:52.212 "namespaces": [ 00:15:52.212 { 00:15:52.212 "nsid": 1, 00:15:52.212 "bdev_name": "Null1", 00:15:52.212 "name": "Null1", 00:15:52.212 "nguid": "4A5C70FEAAD64BE6A7CF400A5389CCEE", 00:15:52.212 "uuid": "4a5c70fe-aad6-4be6-a7cf-400a5389ccee" 00:15:52.212 } 00:15:52.212 ] 00:15:52.212 }, 00:15:52.212 { 00:15:52.213 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:52.213 "subtype": "NVMe", 00:15:52.213 "listen_addresses": [ 00:15:52.213 { 00:15:52.213 "trtype": "TCP", 00:15:52.213 "adrfam": "IPv4", 00:15:52.213 "traddr": "10.0.0.2", 00:15:52.213 "trsvcid": "4420" 00:15:52.213 } 00:15:52.213 ], 00:15:52.213 "allow_any_host": true, 00:15:52.213 "hosts": [], 00:15:52.213 "serial_number": "SPDK00000000000002", 00:15:52.213 "model_number": "SPDK bdev Controller", 00:15:52.213 "max_namespaces": 32, 00:15:52.213 "min_cntlid": 1, 00:15:52.213 "max_cntlid": 65519, 00:15:52.213 "namespaces": [ 00:15:52.213 { 00:15:52.213 "nsid": 1, 00:15:52.213 "bdev_name": "Null2", 00:15:52.213 "name": "Null2", 00:15:52.213 "nguid": "F76F42CAE9374965A7DCA918F6C55C27", 00:15:52.213 "uuid": "f76f42ca-e937-4965-a7dc-a918f6c55c27" 00:15:52.213 } 00:15:52.213 ] 00:15:52.213 }, 00:15:52.213 { 00:15:52.213 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:52.213 "subtype": "NVMe", 00:15:52.213 "listen_addresses": [ 00:15:52.213 { 00:15:52.213 "trtype": "TCP", 00:15:52.213 "adrfam": "IPv4", 00:15:52.213 "traddr": "10.0.0.2", 
00:15:52.213 "trsvcid": "4420" 00:15:52.213 } 00:15:52.213 ], 00:15:52.213 "allow_any_host": true, 00:15:52.213 "hosts": [], 00:15:52.213 "serial_number": "SPDK00000000000003", 00:15:52.213 "model_number": "SPDK bdev Controller", 00:15:52.213 "max_namespaces": 32, 00:15:52.213 "min_cntlid": 1, 00:15:52.213 "max_cntlid": 65519, 00:15:52.213 "namespaces": [ 00:15:52.213 { 00:15:52.213 "nsid": 1, 00:15:52.213 "bdev_name": "Null3", 00:15:52.213 "name": "Null3", 00:15:52.213 "nguid": "B692839E7FA84A3E90704529F040ED11", 00:15:52.213 "uuid": "b692839e-7fa8-4a3e-9070-4529f040ed11" 00:15:52.213 } 00:15:52.213 ] 00:15:52.213 }, 00:15:52.213 { 00:15:52.213 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:52.213 "subtype": "NVMe", 00:15:52.213 "listen_addresses": [ 00:15:52.213 { 00:15:52.213 "trtype": "TCP", 00:15:52.213 "adrfam": "IPv4", 00:15:52.213 "traddr": "10.0.0.2", 00:15:52.213 "trsvcid": "4420" 00:15:52.213 } 00:15:52.213 ], 00:15:52.213 "allow_any_host": true, 00:15:52.213 "hosts": [], 00:15:52.213 "serial_number": "SPDK00000000000004", 00:15:52.213 "model_number": "SPDK bdev Controller", 00:15:52.213 "max_namespaces": 32, 00:15:52.213 "min_cntlid": 1, 00:15:52.213 "max_cntlid": 65519, 00:15:52.213 "namespaces": [ 00:15:52.213 { 00:15:52.213 "nsid": 1, 00:15:52.213 "bdev_name": "Null4", 00:15:52.213 "name": "Null4", 00:15:52.213 "nguid": "7717EA6C30F643688FCBB1B3624D0F15", 00:15:52.213 "uuid": "7717ea6c-30f6-4368-8fcb-b1b3624d0f15" 00:15:52.213 } 00:15:52.213 ] 00:15:52.213 } 00:15:52.213 ] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:52.213 22:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:52.474 rmmod nvme_tcp 00:15:52.474 rmmod nvme_fabrics 00:15:52.474 rmmod nvme_keyring 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 601531 ']' 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 601531 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 601531 ']' 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 601531 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 601531 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 601531' 00:15:52.474 killing process with pid 601531 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 601531 00:15:52.474 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 601531 00:15:52.735 22:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.735 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.650 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:54.650 00:15:54.650 real 0m11.832s 00:15:54.650 user 0m8.654s 00:15:54.650 sys 0m6.234s 00:15:54.650 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.650 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.650 ************************************ 00:15:54.650 END TEST nvmf_target_discovery 00:15:54.650 ************************************ 00:15:54.911 22:44:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.912 ************************************ 00:15:54.912 START TEST nvmf_referrals 00:15:54.912 ************************************ 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:54.912 * Looking for test storage... 
00:15:54.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.912 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.174 --rc genhtml_branch_coverage=1 00:15:55.174 --rc genhtml_function_coverage=1 00:15:55.174 --rc genhtml_legend=1 00:15:55.174 --rc geninfo_all_blocks=1 00:15:55.174 --rc geninfo_unexecuted_blocks=1 00:15:55.174 00:15:55.174 ' 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.174 --rc genhtml_branch_coverage=1 00:15:55.174 --rc genhtml_function_coverage=1 00:15:55.174 --rc genhtml_legend=1 00:15:55.174 --rc geninfo_all_blocks=1 00:15:55.174 --rc geninfo_unexecuted_blocks=1 00:15:55.174 00:15:55.174 ' 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.174 --rc genhtml_branch_coverage=1 00:15:55.174 --rc genhtml_function_coverage=1 00:15:55.174 --rc genhtml_legend=1 00:15:55.174 --rc geninfo_all_blocks=1 00:15:55.174 --rc geninfo_unexecuted_blocks=1 00:15:55.174 00:15:55.174 ' 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.174 --rc genhtml_branch_coverage=1 00:15:55.174 --rc genhtml_function_coverage=1 00:15:55.174 --rc genhtml_legend=1 00:15:55.174 --rc geninfo_all_blocks=1 00:15:55.174 --rc geninfo_unexecuted_blocks=1 00:15:55.174 00:15:55.174 ' 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.174 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:15:55.175 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:03.320 22:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:03.320 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:03.320 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:03.320 22:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:03.320 Found net devices under 0000:31:00.0: cvl_0_0 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:03.320 Found net devices under 0000:31:00.1: cvl_0_1 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:03.320 22:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.320 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:03.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:16:03.321 00:16:03.321 --- 10.0.0.2 ping statistics --- 00:16:03.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.321 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:16:03.321 00:16:03.321 --- 10.0.0.1 ping statistics --- 00:16:03.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.321 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=606288 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 606288 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 606288 ']' 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.321 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.321 [2024-09-30 22:44:29.753558] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:16:03.321 [2024-09-30 22:44:29.753623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.321 [2024-09-30 22:44:29.845093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.321 [2024-09-30 22:44:29.941136] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.321 [2024-09-30 22:44:29.941196] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.321 [2024-09-30 22:44:29.941205] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.321 [2024-09-30 22:44:29.941213] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.321 [2024-09-30 22:44:29.941219] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.321 [2024-09-30 22:44:29.941390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.321 [2024-09-30 22:44:29.941551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.321 [2024-09-30 22:44:29.941708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.321 [2024-09-30 22:44:29.941709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.582 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.582 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:16:03.582 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:03.582 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:03.582 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 [2024-09-30 22:44:30.633816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:16:03.844 [2024-09-30 22:44:30.650164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:03.844 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:03.845 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.106 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:04.106 22:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:04.106 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:04.106 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:04.107 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:04.107 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:04.107 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:04.367 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:04.629 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:04.629 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:04.629 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:04.629 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:04.629 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:04.629 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:04.629 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:04.889 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:04.889 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:04.889 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:04.889 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:04.889 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:04.889 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.151 22:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:05.151 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.151 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:05.151 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:05.151 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:05.151 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:05.151 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:05.151 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:05.151 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:05.151 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:05.412 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:05.672 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:05.932 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:05.932 rmmod nvme_tcp 00:16:05.932 rmmod nvme_fabrics 00:16:05.932 rmmod nvme_keyring 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 606288 ']' 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 606288 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 606288 ']' 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 606288 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 606288 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 606288' 00:16:05.933 killing process with pid 606288 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 606288 00:16:05.933 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 606288 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.193 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.105 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:08.105 00:16:08.105 real 0m13.388s 00:16:08.105 user 0m15.551s 00:16:08.105 sys 0m6.655s 00:16:08.105 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:08.366 ************************************ 00:16:08.366 END TEST nvmf_referrals 00:16:08.366 ************************************ 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.366 ************************************ 00:16:08.366 START TEST nvmf_connect_disconnect 00:16:08.366 ************************************ 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:08.366 * Looking for test storage... 00:16:08.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:16:08.366 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:08.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.627 --rc genhtml_branch_coverage=1 00:16:08.627 --rc genhtml_function_coverage=1 00:16:08.627 --rc genhtml_legend=1 00:16:08.627 --rc geninfo_all_blocks=1 00:16:08.627 --rc geninfo_unexecuted_blocks=1 00:16:08.627 00:16:08.627 ' 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:08.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.627 --rc genhtml_branch_coverage=1 00:16:08.627 --rc genhtml_function_coverage=1 00:16:08.627 --rc genhtml_legend=1 00:16:08.627 --rc geninfo_all_blocks=1 00:16:08.627 --rc geninfo_unexecuted_blocks=1 00:16:08.627 00:16:08.627 ' 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:08.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.627 --rc genhtml_branch_coverage=1 00:16:08.627 --rc genhtml_function_coverage=1 00:16:08.627 --rc genhtml_legend=1 00:16:08.627 --rc geninfo_all_blocks=1 00:16:08.627 --rc geninfo_unexecuted_blocks=1 00:16:08.627 00:16:08.627 ' 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:08.627 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.627 --rc genhtml_branch_coverage=1 00:16:08.627 --rc genhtml_function_coverage=1 00:16:08.627 --rc genhtml_legend=1 00:16:08.627 --rc geninfo_all_blocks=1 00:16:08.627 --rc geninfo_unexecuted_blocks=1 00:16:08.627 00:16:08.627 ' 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.627 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.628 22:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:08.628 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:16.771 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:16.771 
22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:16.772 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:16.772 22:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:16.772 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:16.772 Found net devices under 0000:31:00.0: cvl_0_0 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:16.772 22:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:16.772 Found net devices under 0000:31:00.1: cvl_0_1 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
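The nvmf_tcp_init block just traced is the harness's standard two-endpoint topology for phy runs: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while the other (cvl_0_1) stays in the root namespace as the initiator, so a single machine gets two real TCP endpoints over physical ports (presumably cabled back-to-back or through a common switch, given that the pings below succeed). The logged commands, collected in order:

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up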
00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:16.772 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:16.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:16:16.772 00:16:16.772 --- 10.0.0.2 ping statistics --- 00:16:16.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.772 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:16:16.772 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:16:16.772 00:16:16.772 --- 10.0.0.1 ping statistics --- 00:16:16.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.772 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:16:16.772 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.772 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:16:16.772 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:16.772 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.772 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:16.772 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=611303 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 611303 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 611303 ']' 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.773 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:16.773 [2024-09-30 22:44:43.130356] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:16:16.773 [2024-09-30 22:44:43.130420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.773 [2024-09-30 22:44:43.223040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.773 [2024-09-30 22:44:43.320529] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.773 [2024-09-30 22:44:43.320595] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.773 [2024-09-30 22:44:43.320605] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.773 [2024-09-30 22:44:43.320612] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.773 [2024-09-30 22:44:43.320618] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
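Decoding the nvmf_tgt invocation wrapped across the lines above: the target runs inside the target namespace via ip netns exec, with -i 0 (shared-memory id 0, which is why the trace hint points at /dev/shm/nvmf_trace.0), -e 0xFFFF (all tracepoint groups, matching the "Tracepoint Group Mask 0xFFFF" notice), and -m 0xF (reactor core mask covering cores 0-3; the four "Reactor started" notices follow). The equivalent standalone launch:

    # -i 0      shm id: trace snapshots land in /dev/shm/nvmf_trace.0
    # -e 0xFFFF enable every tracepoint group
    # -m 0xF    reactor mask: four reactors on cores 0-3
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF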
00:16:16.773 [2024-09-30 22:44:43.320781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.773 [2024-09-30 22:44:43.320953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.773 [2024-09-30 22:44:43.321047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.773 [2024-09-30 22:44:43.321047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.100 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.100 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:16:17.100 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:17.100 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.100 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 [2024-09-30 22:44:44.013537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 22:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.100 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 [2024-09-30 22:44:44.083499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.494 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.494 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:16:17.494 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:16:17.494 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:20.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.871 rmmod nvme_tcp 00:16:35.871 rmmod nvme_fabrics 00:16:35.871 rmmod nvme_keyring 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 611303 ']' 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 611303 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 611303 ']' 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 611303 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
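The five "disconnected 1 controller(s)" lines above are the body of connect_disconnect.sh: it stands up a single-namespace subsystem and then loops nvme connect / nvme disconnect for num_iterations=5. The RPC sequence it issued, plus one loop iteration; the RPC calls are verbatim from the trace, while the connect line is stock nvme-cli inferred from the disconnect output (the test additionally passes its --hostnqn/--hostid pair):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # TCP transport, options as logged
    $rpc bdev_malloc_create 64 512                        # 64 MiB RAM disk, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # One of the five iterations:
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # prints the "disconnected 1 controller(s)" line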
00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 611303 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 611303' 00:16:35.871 killing process with pid 611303 00:16:35.871 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 611303 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 611303 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.872 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.782 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:37.782 00:16:37.782 real 0m29.518s 00:16:37.782 user 1m19.182s 00:16:37.782 sys 0m7.177s 00:16:37.782 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:37.782 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:37.782 ************************************ 00:16:37.782 END TEST nvmf_connect_disconnect 00:16:37.782 ************************************ 00:16:37.782 22:45:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:37.782 22:45:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:37.782 22:45:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:37.782 22:45:04 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:16:38.042 ************************************ 00:16:38.042 START TEST nvmf_multitarget 00:16:38.042 ************************************ 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:38.042 * Looking for test storage... 00:16:38.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:38.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.042 --rc genhtml_branch_coverage=1 00:16:38.042 --rc genhtml_function_coverage=1 00:16:38.042 --rc genhtml_legend=1 00:16:38.042 --rc geninfo_all_blocks=1 00:16:38.042 --rc geninfo_unexecuted_blocks=1 00:16:38.042 00:16:38.042 ' 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:38.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.042 --rc genhtml_branch_coverage=1 00:16:38.042 --rc genhtml_function_coverage=1 00:16:38.042 --rc genhtml_legend=1 00:16:38.042 --rc geninfo_all_blocks=1 00:16:38.042 --rc geninfo_unexecuted_blocks=1 00:16:38.042 00:16:38.042 ' 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:38.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.042 --rc genhtml_branch_coverage=1 00:16:38.042 --rc genhtml_function_coverage=1 00:16:38.042 --rc genhtml_legend=1 00:16:38.042 --rc geninfo_all_blocks=1 00:16:38.042 --rc geninfo_unexecuted_blocks=1 00:16:38.042 00:16:38.042 ' 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:38.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.042 --rc genhtml_branch_coverage=1 00:16:38.042 --rc genhtml_function_coverage=1 00:16:38.042 --rc genhtml_legend=1 00:16:38.042 --rc geninfo_all_blocks=1 00:16:38.042 --rc geninfo_unexecuted_blocks=1 00:16:38.042 00:16:38.042 ' 00:16:38.042 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.042 22:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:38.042 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.042 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:38.043 22:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.043 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
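The gather_supported_nvmf_pci_devs trace above and below filters the host PCI bus against whitelists of supported NIC IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox parts), all looked up in a pci_bus_cache map keyed by vendor:device. A minimal stand-alone sketch of that lookup, assuming the cache is filled from lspci (the population step is not shown in this trace and is an assumption here):

    # vendor:device -> space-separated PCI addresses; building the cache
    # from lspci is for illustration only, not the exact nvmf/common.sh code
    declare -A pci_bus_cache
    while read -r slot class vendor device _; do
        pci_bus_cache["0x${vendor}:0x${device}"]+="${slot} "
    done < <(lspci -Dnmm | tr -d '"')

    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    echo "E810 ports: ${e810[*]:-none}"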
00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:46.182 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:46.182 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:46.183 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
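Each surviving PCI function is then resolved to its kernel interface through sysfs, which is where the cvl_0_0/cvl_0_1 names in the records below come from. The idiom, extracted as a runnable sketch:

    # resolve a PCI function to its netdev name(s) via sysfs, as the
    # trace below does for 0000:31:00.0 and 0000:31:00.1
    for pci in 0000:31:00.0 0000:31:00.1; do
        for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdev ]] || continue          # skip if the glob matched nothing
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done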
00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:46.183 Found net devices under 0000:31:00.0: cvl_0_0 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:46.183 Found net devices under 0000:31:00.1: cvl_0_1 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:46.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:16:46.183 00:16:46.183 --- 10.0.0.2 ping statistics --- 00:16:46.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.183 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:16:46.183 00:16:46.183 --- 10.0.0.1 ping statistics --- 00:16:46.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.183 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=620046 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 620046 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 620046 ']' 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.183 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:46.183 [2024-09-30 22:45:12.817520] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:16:46.183 [2024-09-30 22:45:12.817592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.183 [2024-09-30 22:45:12.907889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.183 [2024-09-30 22:45:13.005575] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.183 [2024-09-30 22:45:13.005640] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.183 [2024-09-30 22:45:13.005648] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.183 [2024-09-30 22:45:13.005656] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.183 [2024-09-30 22:45:13.005662] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.183 [2024-09-30 22:45:13.005821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.183 [2024-09-30 22:45:13.005983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.183 [2024-09-30 22:45:13.006057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.183 [2024-09-30 22:45:13.006058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:46.755 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:47.016 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:47.016 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:47.016 "nvmf_tgt_1" 00:16:47.016 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:47.016 "nvmf_tgt_2" 00:16:47.278 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
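With the target app up inside the namespace, multitarget.sh drives it over JSON-RPC: count targets, add two, recount, delete both, recount. Condensed from the records above and below, with the Jenkins workspace prefix shortened to $SPDK_DIR for readability:

    rpc=$SPDK_DIR/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default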
00:16:47.278 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:47.278 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:47.278 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:47.278 true 00:16:47.278 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:47.540 true 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.540 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:47.540 rmmod nvme_tcp 00:16:47.540 rmmod nvme_fabrics 00:16:47.540 rmmod nvme_keyring 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 620046 ']' 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 620046 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 620046 ']' 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 620046 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620046 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:47.802 22:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 620046' 00:16:47.802 killing process with pid 620046 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 620046 00:16:47.802 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 620046 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.063 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.976 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:49.976 00:16:49.977 real 0m12.114s 00:16:49.977 user 0m10.287s 00:16:49.977 sys 0m6.344s 00:16:49.977 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.977 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:49.977 ************************************ 00:16:49.977 END TEST nvmf_multitarget 00:16:49.977 ************************************ 00:16:49.977 22:45:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:49.977 22:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:49.977 22:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.977 22:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 ************************************ 00:16:50.238 START TEST nvmf_rpc 00:16:50.238 ************************************ 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:50.238 * Looking for test storage... 
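Before nvmf_rpc repeats the same bring-up, note the nvmftestfini teardown just traced: it unloads the initiator modules, stops the target app, and strips only the SPDK-tagged iptables rule. Reconstructed as a sketch; the body of _remove_spdk_ns is not shown in this trace, so the netns deletion line is an assumed form:

    modprobe -v -r nvme-tcp                                # also removes nvme_fabrics/nvme_keyring deps
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess of the nvmf_tgt pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK's ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed _remove_spdk_ns behavior
    ip -4 addr flush cvl_0_1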
00:16:50.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.238 --rc genhtml_branch_coverage=1 00:16:50.238 --rc genhtml_function_coverage=1 00:16:50.238 --rc genhtml_legend=1 00:16:50.238 --rc geninfo_all_blocks=1 00:16:50.238 --rc geninfo_unexecuted_blocks=1 00:16:50.238 00:16:50.238 ' 00:16:50.238 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.238 --rc genhtml_branch_coverage=1 00:16:50.238 --rc genhtml_function_coverage=1 00:16:50.238 --rc genhtml_legend=1 00:16:50.238 --rc geninfo_all_blocks=1 00:16:50.238 --rc geninfo_unexecuted_blocks=1 00:16:50.238 00:16:50.239 ' 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:50.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.239 --rc genhtml_branch_coverage=1 00:16:50.239 --rc genhtml_function_coverage=1 00:16:50.239 --rc genhtml_legend=1 00:16:50.239 --rc geninfo_all_blocks=1 00:16:50.239 --rc geninfo_unexecuted_blocks=1 00:16:50.239 00:16:50.239 ' 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:50.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.239 --rc genhtml_branch_coverage=1 00:16:50.239 --rc genhtml_function_coverage=1 00:16:50.239 --rc genhtml_legend=1 00:16:50.239 --rc geninfo_all_blocks=1 00:16:50.239 --rc geninfo_unexecuted_blocks=1 00:16:50.239 00:16:50.239 ' 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
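Both tests open by replaying the same lcov version gate from scripts/common.sh, traced again just above: split the two version strings on dots and compare component-wise, enabling the branch/function coverage flags only when lcov is older than 2. A simplified stand-alone sketch of that comparison (the real helper also validates each component via its decimal function):

    lt() {                        # return 0 iff version $1 < version $2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                  # equal versions are not "less than"
    }
    lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'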
00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:50.239 22:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:50.239 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:58.393 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:58.393 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:58.393 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:58.394 
22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:58.394 Found net devices under 0000:31:00.0: cvl_0_0 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:58.394 Found net devices under 0000:31:00.1: cvl_0_1 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:58.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:16:58.394 00:16:58.394 --- 10.0.0.2 ping statistics --- 00:16:58.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.394 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:16:58.394 00:16:58.394 --- 10.0.0.1 ping statistics --- 00:16:58.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.394 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:58.394 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=624705 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 624705 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 624705 ']' 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.394 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.394 [2024-09-30 22:45:25.072118] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:16:58.394 [2024-09-30 22:45:25.072193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.394 [2024-09-30 22:45:25.163286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.394 [2024-09-30 22:45:25.261674] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.394 [2024-09-30 22:45:25.261736] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.394 [2024-09-30 22:45:25.261745] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.394 [2024-09-30 22:45:25.261753] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.394 [2024-09-30 22:45:25.261759] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.394 [2024-09-30 22:45:25.261936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.394 [2024-09-30 22:45:25.262068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.394 [2024-09-30 22:45:25.262380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.394 [2024-09-30 22:45:25.262384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:58.968 "tick_rate": 2400000000, 00:16:58.968 "poll_groups": [ 00:16:58.968 { 00:16:58.968 "name": "nvmf_tgt_poll_group_000", 00:16:58.968 "admin_qpairs": 0, 00:16:58.968 "io_qpairs": 0, 00:16:58.968 "current_admin_qpairs": 0, 00:16:58.968 "current_io_qpairs": 0, 00:16:58.968 "pending_bdev_io": 0, 00:16:58.968 "completed_nvme_io": 0, 00:16:58.968 "transports": [] 00:16:58.968 }, 00:16:58.968 { 00:16:58.968 "name": "nvmf_tgt_poll_group_001", 00:16:58.968 "admin_qpairs": 0, 00:16:58.968 "io_qpairs": 0, 00:16:58.968 "current_admin_qpairs": 0, 00:16:58.968 "current_io_qpairs": 0, 00:16:58.968 "pending_bdev_io": 0, 00:16:58.968 "completed_nvme_io": 0, 00:16:58.968 "transports": [] 00:16:58.968 }, 00:16:58.968 { 00:16:58.968 "name": "nvmf_tgt_poll_group_002", 00:16:58.968 "admin_qpairs": 0, 00:16:58.968 "io_qpairs": 0, 00:16:58.968 
"current_admin_qpairs": 0, 00:16:58.968 "current_io_qpairs": 0, 00:16:58.968 "pending_bdev_io": 0, 00:16:58.968 "completed_nvme_io": 0, 00:16:58.968 "transports": [] 00:16:58.968 }, 00:16:58.968 { 00:16:58.968 "name": "nvmf_tgt_poll_group_003", 00:16:58.968 "admin_qpairs": 0, 00:16:58.968 "io_qpairs": 0, 00:16:58.968 "current_admin_qpairs": 0, 00:16:58.968 "current_io_qpairs": 0, 00:16:58.968 "pending_bdev_io": 0, 00:16:58.968 "completed_nvme_io": 0, 00:16:58.968 "transports": [] 00:16:58.968 } 00:16:58.968 ] 00:16:58.968 }' 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:58.968 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.230 [2024-09-30 22:45:26.057308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:59.230 "tick_rate": 2400000000, 00:16:59.230 "poll_groups": [ 00:16:59.230 { 00:16:59.230 "name": "nvmf_tgt_poll_group_000", 00:16:59.230 "admin_qpairs": 0, 00:16:59.230 "io_qpairs": 0, 00:16:59.230 "current_admin_qpairs": 0, 00:16:59.230 "current_io_qpairs": 0, 00:16:59.230 "pending_bdev_io": 0, 00:16:59.230 "completed_nvme_io": 0, 00:16:59.230 "transports": [ 00:16:59.230 { 00:16:59.230 "trtype": "TCP" 00:16:59.230 } 00:16:59.230 ] 00:16:59.230 }, 00:16:59.230 { 00:16:59.230 "name": "nvmf_tgt_poll_group_001", 00:16:59.230 "admin_qpairs": 0, 00:16:59.230 "io_qpairs": 0, 00:16:59.230 "current_admin_qpairs": 0, 00:16:59.230 "current_io_qpairs": 0, 00:16:59.230 "pending_bdev_io": 0, 00:16:59.230 "completed_nvme_io": 0, 00:16:59.230 "transports": [ 00:16:59.230 { 00:16:59.230 "trtype": "TCP" 00:16:59.230 } 00:16:59.230 ] 00:16:59.230 }, 00:16:59.230 { 00:16:59.230 "name": "nvmf_tgt_poll_group_002", 00:16:59.230 "admin_qpairs": 0, 00:16:59.230 "io_qpairs": 0, 00:16:59.230 "current_admin_qpairs": 0, 00:16:59.230 "current_io_qpairs": 0, 00:16:59.230 "pending_bdev_io": 0, 00:16:59.230 "completed_nvme_io": 0, 00:16:59.230 "transports": [ 00:16:59.230 { 00:16:59.230 "trtype": "TCP" 
00:16:59.230 } 00:16:59.230 ] 00:16:59.230 }, 00:16:59.230 { 00:16:59.230 "name": "nvmf_tgt_poll_group_003", 00:16:59.230 "admin_qpairs": 0, 00:16:59.230 "io_qpairs": 0, 00:16:59.230 "current_admin_qpairs": 0, 00:16:59.230 "current_io_qpairs": 0, 00:16:59.230 "pending_bdev_io": 0, 00:16:59.230 "completed_nvme_io": 0, 00:16:59.230 "transports": [ 00:16:59.230 { 00:16:59.230 "trtype": "TCP" 00:16:59.230 } 00:16:59.230 ] 00:16:59.230 } 00:16:59.230 ] 00:16:59.230 }' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.230 Malloc1 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
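[editor's note] The stretch above covers the two nvmf_get_stats dumps: target/rpc.sh sanity-checks the idle target with two small jq helpers, creates the TCP transport between the dumps, then allocates the malloc bdev that backs every namespace in this test. A condensed reading of those steps; rpc_cmd in the harness forwards to scripts/rpc.py, and the <<< "$stats" input to the helpers is an assumption (the trace only shows their jq and wc/awk stages):

    jcount() { local filter=$1; jq "$filter" <<< "$stats" | wc -l; }                        # count matching fields
    jsum()   { local filter=$1; jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'; }  # sum a numeric field

    jcount '.poll_groups[].name'                              # expect 4 poll groups, one per core in -m 0xF
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # opts come from NVMF_TRANSPORT_OPTS
    jsum '.poll_groups[].admin_qpairs'                        # expect 0: nothing has connected yet
    jsum '.poll_groups[].io_qpairs'                           # likewise 0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1       # 64 MB RAM bdev, 512-byte blocks

After the transport is created, the second stats dump shows a "TCP" entry in each poll group's transports array, which is exactly what the script asserts.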
common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.230 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.493 [2024-09-30 22:45:26.255595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:59.493 [2024-09-30 22:45:26.296191] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:16:59.493 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:59.493 could not add new controller: failed to write to nvme-fabrics device 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:59.493 22:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.493 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.878 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:00.878 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:00.878 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.878 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:00.878 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:02.879 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:02.879 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:02.879 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:02.879 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:02.879 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.879 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:02.879 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.163 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:03.163 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:03.163 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:03.163 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.163 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:03.163 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.163 [2024-09-30 22:45:30.048973] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:03.163 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:03.163 could not add new controller: failed to write to nvme-fabrics device 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.163 
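[editor's note] The NOT-wrapped connect attempts around here fail on purpose: the subsystem's host ACL was closed (allow_any_host disabled), so the target rejects any host NQN not on its whitelist ("Subsystem ... does not allow host ...") and the harness asserts the non-zero exit status. The ACL sequence being exercised, condensed to rpc.py/nvme-cli calls, with $hostnqn standing in for the uuid-based host NQN in the log and the --hostnqn/--hostid flags omitted from the connect lines:

    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1        # close the ACL
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420             # rejected
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"      # whitelist this host
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420             # accepted
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"   # revoke: rejected again
    scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1        # open to every host

With the ACL reopened by the final call, the connect that follows in the log succeeds without any add_host.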
22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.163 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.076 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:05.076 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:05.076 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.076 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:05.076 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:06.986 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:06.986 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:06.986 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.986 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:06.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:06.987 
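[editor's note] Here target/rpc.sh enters its first loop (script lines 81-94, seq 1 5): five rounds of building a subsystem, exposing Malloc1 as namespace 5, connecting from the root namespace, and tearing it all down. One iteration, condensed to its rpc.py/nvme-cli equivalents (host identity flags again omitted):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5      # pin NSID 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # once waitforserial has seen the device
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1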
22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.987 [2024-09-30 22:45:33.809313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.987 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:08.898 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:08.898 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:08.898 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:08.898 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:08.898 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:10.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:10.807 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.808 [2024-09-30 22:45:37.555908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.808 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:12.197 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:12.197 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:12.197 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.197 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:12.197 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.743 [2024-09-30 22:45:41.316712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.743 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.128 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:16.128 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:16.128 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:16.128 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:16.128 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:18.041 
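[editor's note] waitforserial, traced here after every connect, is the harness's readiness gate: it polls lsblk until a block device whose serial matches the subsystem's (SPDKISFASTANDAWESOME) appears, sleeping 2 s between probes; waitforserial_disconnect does the inverse after each nvme disconnect. A condensed sketch reconstructed from this trace (the real helper lives in common/autotest_common.sh and also accepts an expected device count):

    waitforserial() {
        local serial=$1 nvme_device_counter=1 nvme_devices=0 i=0
        sleep 2                                             # give udev time to create the node
        while (( i++ <= 15 )); do                           # roughly a 30 s budget
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1                                            # device never showed up
    }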
22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:18.041 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.042 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.042 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.042 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.042 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.042 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:18.042 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.042 [2024-09-30 22:45:44.997127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.042 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:19.956 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:19.956 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:19.956 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.956 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:19.956 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 [2024-09-30 22:45:48.717703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.868 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:23.251 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:23.251 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:23.251 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.251 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:23.251 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:25.794 
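[editor's note] The "seq 1 5" above opens the script's second loop (target/rpc.sh lines 99-107). Unlike the first loop it never touches the initiator: it churns the subsystem lifecycle purely over RPC. nvmf_subsystem_add_ns is now called without -n, so the target assigns the lowest free namespace ID, which is why the matching remove_ns passes 1 instead of 5. One iteration, condensed (rpc_cmd forwards to scripts/rpc.py):

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # no -n: target picks NSID 1
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done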
22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 [2024-09-30 22:45:52.549599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 [2024-09-30 22:45:52.617750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.794 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 
22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 [2024-09-30 22:45:52.681913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 [2024-09-30 22:45:52.754130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.795 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 [2024-09-30 22:45:52.818324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.057 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:26.057 "tick_rate": 2400000000, 00:17:26.057 "poll_groups": [ 00:17:26.057 { 00:17:26.057 "name": "nvmf_tgt_poll_group_000", 00:17:26.057 "admin_qpairs": 0, 00:17:26.057 "io_qpairs": 224, 00:17:26.057 "current_admin_qpairs": 0, 00:17:26.057 "current_io_qpairs": 0, 00:17:26.057 "pending_bdev_io": 0, 00:17:26.057 "completed_nvme_io": 315, 00:17:26.057 "transports": [ 00:17:26.057 { 00:17:26.057 "trtype": "TCP" 00:17:26.057 } 00:17:26.057 ] 00:17:26.057 }, 00:17:26.057 { 00:17:26.057 "name": "nvmf_tgt_poll_group_001", 00:17:26.057 "admin_qpairs": 1, 00:17:26.058 "io_qpairs": 223, 00:17:26.058 "current_admin_qpairs": 0, 00:17:26.058 "current_io_qpairs": 0, 00:17:26.058 "pending_bdev_io": 0, 00:17:26.058 "completed_nvme_io": 224, 00:17:26.058 "transports": [ 00:17:26.058 { 00:17:26.058 "trtype": "TCP" 00:17:26.058 } 00:17:26.058 ] 00:17:26.058 }, 00:17:26.058 { 00:17:26.058 "name": "nvmf_tgt_poll_group_002", 00:17:26.058 "admin_qpairs": 6, 00:17:26.058 "io_qpairs": 218, 00:17:26.058 "current_admin_qpairs": 0, 00:17:26.058 "current_io_qpairs": 0, 00:17:26.058 "pending_bdev_io": 0, 00:17:26.058 "completed_nvme_io": 219, 00:17:26.058 "transports": [ 00:17:26.058 { 00:17:26.058 "trtype": "TCP" 00:17:26.058 } 00:17:26.058 ] 00:17:26.058 }, 00:17:26.058 { 00:17:26.058 "name": "nvmf_tgt_poll_group_003", 00:17:26.058 "admin_qpairs": 0, 00:17:26.058 "io_qpairs": 224, 00:17:26.058 "current_admin_qpairs": 0, 00:17:26.058 "current_io_qpairs": 0, 00:17:26.058 "pending_bdev_io": 0, 00:17:26.058 "completed_nvme_io": 481, 00:17:26.058 "transports": [ 00:17:26.058 { 00:17:26.058 "trtype": "TCP" 00:17:26.058 } 00:17:26.058 ] 00:17:26.058 } 00:17:26.058 ] 00:17:26.058 }' 00:17:26.058 22:45:52 
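The jsum helper invoked next reduces one field across all poll groups with jq and awk. A sketch of the same reduction, assuming the nvmf_get_stats JSON above is held in $stats:

    # sum one per-poll-group counter, as jsum pipes jq into awk
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'   # 0+1+6+0 = 7
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}'   # 224+223+218+224 = 889

Those totals are exactly what the (( 7 > 0 )) and (( 889 > 0 )) assertions below verify.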
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.058 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.058 rmmod nvme_tcp 00:17:26.058 rmmod nvme_fabrics 00:17:26.058 rmmod nvme_keyring 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 624705 ']' 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 624705 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 624705 ']' 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 624705 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.058 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 624705 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 624705' 
00:17:26.319 killing process with pid 624705 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 624705 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 624705 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.319 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.863 00:17:28.863 real 0m38.341s 00:17:28.863 user 1m53.935s 00:17:28.863 sys 0m8.165s 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 ************************************ 00:17:28.863 END TEST nvmf_rpc 00:17:28.863 ************************************ 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 ************************************ 00:17:28.863 START TEST nvmf_invalid 00:17:28.863 ************************************ 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:28.863 * Looking for test storage... 
00:17:28.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:28.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.863 --rc genhtml_branch_coverage=1 00:17:28.863 --rc genhtml_function_coverage=1 00:17:28.863 --rc genhtml_legend=1 00:17:28.863 --rc geninfo_all_blocks=1 00:17:28.863 --rc geninfo_unexecuted_blocks=1 00:17:28.863 00:17:28.863 ' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:28.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.863 --rc genhtml_branch_coverage=1 00:17:28.863 --rc genhtml_function_coverage=1 00:17:28.863 --rc genhtml_legend=1 00:17:28.863 --rc geninfo_all_blocks=1 00:17:28.863 --rc geninfo_unexecuted_blocks=1 00:17:28.863 00:17:28.863 ' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:28.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.863 --rc genhtml_branch_coverage=1 00:17:28.863 --rc genhtml_function_coverage=1 00:17:28.863 --rc genhtml_legend=1 00:17:28.863 --rc geninfo_all_blocks=1 00:17:28.863 --rc geninfo_unexecuted_blocks=1 00:17:28.863 00:17:28.863 ' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:28.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.863 --rc genhtml_branch_coverage=1 00:17:28.863 --rc genhtml_function_coverage=1 00:17:28.863 --rc genhtml_legend=1 00:17:28.863 --rc geninfo_all_blocks=1 00:17:28.863 --rc geninfo_unexecuted_blocks=1 00:17:28.863 00:17:28.863 ' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:28.863 22:45:55 
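The cmp_versions walk above splits both version strings on '.', pads the shorter one, and compares field by field until one side wins. A standalone sketch of that idea (function name and structure assumed, not the verbatim scripts/common.sh source):

    # succeed when dotted version $1 sorts strictly before $2 (e.g. 1.15 < 2)
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"

Here the first field already decides (1 < 2), so the lcov 1.15 on this rig gets the pre-2.x LCOV_OPTS exported below.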
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.863 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:28.864 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:37.039 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.039 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:37.040 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
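Each matching PCI function is then resolved to its kernel net device through sysfs. A reduced sketch of the loop traced below (pci_devs assumed populated with the two 0x159b E810 functions found above):

    for pci in "${pci_devs[@]}"; do
        # the netdev bound to a PCI function is published under its sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

For this rig that yields cvl_0_0 and cvl_0_1, the two interfaces the TCP init code splits between target and initiator.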
00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:37.040 Found net devices under 0000:31:00.0: cvl_0_0 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:37.040 Found net devices under 0000:31:00.1: cvl_0_1 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.040 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:17:37.040 00:17:37.040 --- 10.0.0.2 ping statistics --- 00:17:37.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.040 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:37.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:17:37.040 00:17:37.040 --- 10.0.0.1 ping statistics --- 00:17:37.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.040 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=634604 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 634604 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 634604 ']' 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.040 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.040 [2024-09-30 22:46:03.392338] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
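The nvmf_tcp_init sequence above moves one E810 port into a private network namespace so target (10.0.0.2) and initiator (10.0.0.1) reach each other over the physical link. Condensed from the commands in the trace (device and namespace names as logged):

    ip netns add cvl_0_0_ns_spdk                            # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> root ns

With both pings answered, nvmf_tgt itself is launched inside the namespace via ip netns exec, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD here.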
00:17:37.040 [2024-09-30 22:46:03.392407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.040 [2024-09-30 22:46:03.481886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.040 [2024-09-30 22:46:03.580301] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.040 [2024-09-30 22:46:03.580361] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.040 [2024-09-30 22:46:03.580370] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.040 [2024-09-30 22:46:03.580377] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.040 [2024-09-30 22:46:03.580383] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.040 [2024-09-30 22:46:03.580550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.040 [2024-09-30 22:46:03.580712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.040 [2024-09-30 22:46:03.580870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.040 [2024-09-30 22:46:03.580871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.302 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.302 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:37.302 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:37.302 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.302 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.302 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.302 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:37.302 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22888 00:17:37.564 [2024-09-30 22:46:04.432769] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:37.564 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:37.564 { 00:17:37.564 "nqn": "nqn.2016-06.io.spdk:cnode22888", 00:17:37.564 "tgt_name": "foobar", 00:17:37.564 "method": "nvmf_create_subsystem", 00:17:37.564 "req_id": 1 00:17:37.564 } 00:17:37.564 Got JSON-RPC error response 00:17:37.564 response: 00:17:37.564 { 00:17:37.564 "code": -32603, 00:17:37.564 "message": "Unable to find target foobar" 00:17:37.564 }' 00:17:37.564 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:37.564 { 00:17:37.564 "nqn": "nqn.2016-06.io.spdk:cnode22888", 00:17:37.564 "tgt_name": "foobar", 00:17:37.564 "method": "nvmf_create_subsystem", 00:17:37.564 "req_id": 1 00:17:37.564 } 00:17:37.564 Got JSON-RPC error response 00:17:37.564 
response: 00:17:37.564 { 00:17:37.564 "code": -32603, 00:17:37.564 "message": "Unable to find target foobar" 00:17:37.564 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:37.564 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:37.564 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26829 00:17:37.825 [2024-09-30 22:46:04.641585] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26829: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:37.825 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:37.825 { 00:17:37.825 "nqn": "nqn.2016-06.io.spdk:cnode26829", 00:17:37.825 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:37.825 "method": "nvmf_create_subsystem", 00:17:37.825 "req_id": 1 00:17:37.825 } 00:17:37.825 Got JSON-RPC error response 00:17:37.825 response: 00:17:37.825 { 00:17:37.825 "code": -32602, 00:17:37.825 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:37.825 }' 00:17:37.825 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:37.825 { 00:17:37.825 "nqn": "nqn.2016-06.io.spdk:cnode26829", 00:17:37.825 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:37.825 "method": "nvmf_create_subsystem", 00:17:37.825 "req_id": 1 00:17:37.825 } 00:17:37.825 Got JSON-RPC error response 00:17:37.825 response: 00:17:37.825 { 00:17:37.825 "code": -32602, 00:17:37.825 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:37.825 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:37.825 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:37.825 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30854 00:17:38.086 [2024-09-30 22:46:04.854299] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30854: invalid model number 'SPDK_Controller' 00:17:38.086 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:38.086 { 00:17:38.086 "nqn": "nqn.2016-06.io.spdk:cnode30854", 00:17:38.086 "model_number": "SPDK_Controller\u001f", 00:17:38.087 "method": "nvmf_create_subsystem", 00:17:38.087 "req_id": 1 00:17:38.087 } 00:17:38.087 Got JSON-RPC error response 00:17:38.087 response: 00:17:38.087 { 00:17:38.087 "code": -32602, 00:17:38.087 "message": "Invalid MN SPDK_Controller\u001f" 00:17:38.087 }' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:38.087 { 00:17:38.087 "nqn": "nqn.2016-06.io.spdk:cnode30854", 00:17:38.087 "model_number": "SPDK_Controller\u001f", 00:17:38.087 "method": "nvmf_create_subsystem", 00:17:38.087 "req_id": 1 00:17:38.087 } 00:17:38.087 Got JSON-RPC error response 00:17:38.087 response: 00:17:38.087 { 00:17:38.087 "code": -32602, 00:17:38.087 "message": "Invalid MN SPDK_Controller\u001f" 00:17:38.087 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:38.087 22:46:04 
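Each negative probe in this test follows the same shape: capture rpc.py's JSON-RPC error text, then substring-match it. Roughly (the capture plumbing here is a hedged reconstruction; flags and error strings are as logged):

    # an unknown target name fails with code -32603
    out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22888 2>&1) || true
    [[ $out == *"Unable to find target"* ]]
    # a serial number or model number carrying a control byte (\x1f above)
    # fails with code -32602 and "Invalid SN" / "Invalid MN" respectively

The random string assembled next feeds the same kind of check with harder input.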
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:38.087 22:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:38.087 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:38.087 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!B]Ve@/VHj490Dn|P?vAF' 00:17:38.088 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '!B]Ve@/VHj490Dn|P?vAF' nqn.2016-06.io.spdk:cnode11400 00:17:38.349 [2024-09-30 22:46:05.235745] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11400: invalid serial number '!B]Ve@/VHj490Dn|P?vAF' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:38.349 { 00:17:38.349 "nqn": "nqn.2016-06.io.spdk:cnode11400", 00:17:38.349 "serial_number": "!B]Ve@/VHj490Dn|P?vAF", 00:17:38.349 "method": "nvmf_create_subsystem", 00:17:38.349 "req_id": 1 00:17:38.349 } 00:17:38.349 Got JSON-RPC error response 00:17:38.349 response: 00:17:38.349 { 00:17:38.349 "code": -32602, 00:17:38.349 "message": "Invalid SN !B]Ve@/VHj490Dn|P?vAF" 00:17:38.349 }' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:38.349 { 00:17:38.349 "nqn": "nqn.2016-06.io.spdk:cnode11400", 00:17:38.349 "serial_number": "!B]Ve@/VHj490Dn|P?vAF", 00:17:38.349 "method": "nvmf_create_subsystem", 00:17:38.349 "req_id": 1 00:17:38.349 } 00:17:38.349 Got JSON-RPC error response 00:17:38.349 response: 00:17:38.349 { 00:17:38.349 "code": -32602, 00:17:38.349 "message": "Invalid SN !B]Ve@/VHj490Dn|P?vAF" 00:17:38.349 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 
00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.349 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 
00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.350 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x30' 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:38.611 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
125 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'li`$UU8:D ~?E0qzdo0-' 00:17:38.612 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'li`$UU8:D ~?E0qzdo0-' nqn.2016-06.io.spdk:cnode17090 00:17:38.873 [2024-09-30 22:46:05.773816] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17090: invalid model number 'li`$UU8:D ~?E0qzdo0-' 00:17:38.873 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:38.873 { 00:17:38.873 "nqn": "nqn.2016-06.io.spdk:cnode17090", 00:17:38.873 "model_number": "li`$UU8:D ~?E0qzdo0-", 00:17:38.873 "method": "nvmf_create_subsystem", 00:17:38.873 "req_id": 1 00:17:38.873 } 00:17:38.873 Got JSON-RPC error response 00:17:38.873 response: 00:17:38.873 { 00:17:38.873 "code": -32602, 00:17:38.873 "message": "Invalid MN li`$UU8:D ~?E0qzdo0-" 00:17:38.873 }' 00:17:38.873 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:38.873 { 00:17:38.873 "nqn": "nqn.2016-06.io.spdk:cnode17090", 00:17:38.873 "model_number": "li`$UU8:D ~?E0qzdo0-", 00:17:38.873 "method": "nvmf_create_subsystem", 00:17:38.873 "req_id": 1 00:17:38.873 } 00:17:38.873 Got JSON-RPC error response 00:17:38.873 response: 00:17:38.873 { 00:17:38.873 "code": -32602, 00:17:38.873 "message": "Invalid MN li`$UU8:D ~?E0qzdo0-" 00:17:38.873 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:38.873 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:39.134 [2024-09-30 22:46:05.978708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.134 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:39.394 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:39.394 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:39.394 
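The remaining negative-path cases traced below all follow one pattern: call nvmf_create_subsystem with an out-of-range controller ID and require a -32602 "Invalid cntlid range" JSON-RPC error. A condensed sketch of that pattern, using the same nqn and cntlid values as the trace (expect_cntlid_error is illustrative, not a helper in invalid.sh):

    # Each call must fail; valid cntlids are 1-65519 and min must not exceed max.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    expect_cntlid_error() {
        local out
        out=$($rpc nvmf_create_subsystem "$@" 2>&1) && return 1   # success would be a bug
        [[ $out == *"Invalid cntlid range"* ]]
    }
    expect_cntlid_error nqn.2016-06.io.spdk:cnode28992 -i 0        # min below 1
    expect_cntlid_error nqn.2016-06.io.spdk:cnode2843  -i 65520    # min above 65519
    expect_cntlid_error nqn.2016-06.io.spdk:cnode9984  -I 0        # max below 1
    expect_cntlid_error nqn.2016-06.io.spdk:cnode8336  -I 65520    # max above 65519
    expect_cntlid_error nqn.2016-06.io.spdk:cnode11272 -i 6 -I 5   # min > max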
22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:39.394 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:39.394 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:39.394 [2024-09-30 22:46:06.380053] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:39.654 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:39.654 { 00:17:39.654 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:39.654 "listen_address": { 00:17:39.654 "trtype": "tcp", 00:17:39.654 "traddr": "", 00:17:39.654 "trsvcid": "4421" 00:17:39.654 }, 00:17:39.654 "method": "nvmf_subsystem_remove_listener", 00:17:39.654 "req_id": 1 00:17:39.654 } 00:17:39.654 Got JSON-RPC error response 00:17:39.654 response: 00:17:39.654 { 00:17:39.654 "code": -32602, 00:17:39.654 "message": "Invalid parameters" 00:17:39.654 }' 00:17:39.654 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:39.654 { 00:17:39.654 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:39.654 "listen_address": { 00:17:39.654 "trtype": "tcp", 00:17:39.654 "traddr": "", 00:17:39.654 "trsvcid": "4421" 00:17:39.654 }, 00:17:39.654 "method": "nvmf_subsystem_remove_listener", 00:17:39.654 "req_id": 1 00:17:39.654 } 00:17:39.654 Got JSON-RPC error response 00:17:39.654 response: 00:17:39.654 { 00:17:39.654 "code": -32602, 00:17:39.654 "message": "Invalid parameters" 00:17:39.654 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:39.654 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28992 -i 0 00:17:39.654 [2024-09-30 22:46:06.568633] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28992: invalid cntlid range [0-65519] 00:17:39.654 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:39.654 { 00:17:39.654 "nqn": "nqn.2016-06.io.spdk:cnode28992", 00:17:39.654 "min_cntlid": 0, 00:17:39.654 "method": "nvmf_create_subsystem", 00:17:39.654 "req_id": 1 00:17:39.654 } 00:17:39.654 Got JSON-RPC error response 00:17:39.654 response: 00:17:39.654 { 00:17:39.654 "code": -32602, 00:17:39.654 "message": "Invalid cntlid range [0-65519]" 00:17:39.654 }' 00:17:39.654 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:39.654 { 00:17:39.654 "nqn": "nqn.2016-06.io.spdk:cnode28992", 00:17:39.654 "min_cntlid": 0, 00:17:39.654 "method": "nvmf_create_subsystem", 00:17:39.654 "req_id": 1 00:17:39.654 } 00:17:39.654 Got JSON-RPC error response 00:17:39.654 response: 00:17:39.654 { 00:17:39.654 "code": -32602, 00:17:39.654 "message": "Invalid cntlid range [0-65519]" 00:17:39.654 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.654 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2843 -i 65520 00:17:39.913 [2024-09-30 22:46:06.757251] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2843: invalid cntlid range [65520-65519] 00:17:39.913 22:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:39.913 { 00:17:39.913 "nqn": "nqn.2016-06.io.spdk:cnode2843", 00:17:39.913 "min_cntlid": 65520, 00:17:39.913 "method": "nvmf_create_subsystem", 00:17:39.913 "req_id": 1 00:17:39.913 } 00:17:39.913 Got JSON-RPC error response 00:17:39.913 response: 00:17:39.913 { 00:17:39.913 "code": -32602, 00:17:39.913 "message": "Invalid cntlid range [65520-65519]" 00:17:39.913 }' 00:17:39.913 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:39.913 { 00:17:39.913 "nqn": "nqn.2016-06.io.spdk:cnode2843", 00:17:39.913 "min_cntlid": 65520, 00:17:39.913 "method": "nvmf_create_subsystem", 00:17:39.913 "req_id": 1 00:17:39.913 } 00:17:39.913 Got JSON-RPC error response 00:17:39.913 response: 00:17:39.913 { 00:17:39.913 "code": -32602, 00:17:39.913 "message": "Invalid cntlid range [65520-65519]" 00:17:39.913 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.913 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9984 -I 0 00:17:40.173 [2024-09-30 22:46:06.945810] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9984: invalid cntlid range [1-0] 00:17:40.173 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:40.173 { 00:17:40.173 "nqn": "nqn.2016-06.io.spdk:cnode9984", 00:17:40.173 "max_cntlid": 0, 00:17:40.173 "method": "nvmf_create_subsystem", 00:17:40.173 "req_id": 1 00:17:40.173 } 00:17:40.173 Got JSON-RPC error response 00:17:40.173 response: 00:17:40.173 { 00:17:40.173 "code": -32602, 00:17:40.173 "message": "Invalid cntlid range [1-0]" 00:17:40.173 }' 00:17:40.173 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:40.173 { 00:17:40.173 "nqn": "nqn.2016-06.io.spdk:cnode9984", 00:17:40.173 "max_cntlid": 0, 00:17:40.173 "method": "nvmf_create_subsystem", 00:17:40.173 "req_id": 1 00:17:40.173 } 00:17:40.173 Got JSON-RPC error response 00:17:40.173 response: 00:17:40.173 { 00:17:40.173 "code": -32602, 00:17:40.173 "message": "Invalid cntlid range [1-0]" 00:17:40.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:40.173 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8336 -I 65520 00:17:40.173 [2024-09-30 22:46:07.130403] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8336: invalid cntlid range [1-65520] 00:17:40.173 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:40.173 { 00:17:40.173 "nqn": "nqn.2016-06.io.spdk:cnode8336", 00:17:40.173 "max_cntlid": 65520, 00:17:40.173 "method": "nvmf_create_subsystem", 00:17:40.174 "req_id": 1 00:17:40.174 } 00:17:40.174 Got JSON-RPC error response 00:17:40.174 response: 00:17:40.174 { 00:17:40.174 "code": -32602, 00:17:40.174 "message": "Invalid cntlid range [1-65520]" 00:17:40.174 }' 00:17:40.174 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:40.174 { 00:17:40.174 "nqn": "nqn.2016-06.io.spdk:cnode8336", 00:17:40.174 "max_cntlid": 65520, 00:17:40.174 "method": "nvmf_create_subsystem", 00:17:40.174 "req_id": 1 00:17:40.174 } 00:17:40.174 Got JSON-RPC error response 00:17:40.174 
response: 00:17:40.174 { 00:17:40.174 "code": -32602, 00:17:40.174 "message": "Invalid cntlid range [1-65520]" 00:17:40.174 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:40.174 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11272 -i 6 -I 5 00:17:40.434 [2024-09-30 22:46:07.318983] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11272: invalid cntlid range [6-5] 00:17:40.434 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:40.434 { 00:17:40.434 "nqn": "nqn.2016-06.io.spdk:cnode11272", 00:17:40.434 "min_cntlid": 6, 00:17:40.434 "max_cntlid": 5, 00:17:40.434 "method": "nvmf_create_subsystem", 00:17:40.434 "req_id": 1 00:17:40.434 } 00:17:40.434 Got JSON-RPC error response 00:17:40.434 response: 00:17:40.434 { 00:17:40.434 "code": -32602, 00:17:40.434 "message": "Invalid cntlid range [6-5]" 00:17:40.434 }' 00:17:40.434 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:40.434 { 00:17:40.434 "nqn": "nqn.2016-06.io.spdk:cnode11272", 00:17:40.434 "min_cntlid": 6, 00:17:40.434 "max_cntlid": 5, 00:17:40.434 "method": "nvmf_create_subsystem", 00:17:40.434 "req_id": 1 00:17:40.434 } 00:17:40.434 Got JSON-RPC error response 00:17:40.434 response: 00:17:40.434 { 00:17:40.434 "code": -32602, 00:17:40.434 "message": "Invalid cntlid range [6-5]" 00:17:40.434 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:40.434 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:40.694 { 00:17:40.694 "name": "foobar", 00:17:40.694 "method": "nvmf_delete_target", 00:17:40.694 "req_id": 1 00:17:40.694 } 00:17:40.694 Got JSON-RPC error response 00:17:40.694 response: 00:17:40.694 { 00:17:40.694 "code": -32602, 00:17:40.694 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:40.694 }' 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:40.694 { 00:17:40.694 "name": "foobar", 00:17:40.694 "method": "nvmf_delete_target", 00:17:40.694 "req_id": 1 00:17:40.694 } 00:17:40.694 Got JSON-RPC error response 00:17:40.694 response: 00:17:40.694 { 00:17:40.694 "code": -32602, 00:17:40.694 "message": "The specified target doesn't exist, cannot delete it." 
00:17:40.694 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.694 rmmod nvme_tcp 00:17:40.694 rmmod nvme_fabrics 00:17:40.694 rmmod nvme_keyring 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 634604 ']' 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 634604 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 634604 ']' 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 634604 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 634604 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 634604' 00:17:40.694 killing process with pid 634604 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 634604 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 634604 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:40.694 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # 
iptables-restore 00:17:40.953 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.953 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.953 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.953 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.953 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.859 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:42.859 00:17:42.859 real 0m14.369s 00:17:42.859 user 0m21.185s 00:17:42.859 sys 0m6.828s 00:17:42.859 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:42.859 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:42.859 ************************************ 00:17:42.859 END TEST nvmf_invalid 00:17:42.859 ************************************ 00:17:42.859 22:46:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:42.859 22:46:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:42.859 22:46:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:42.859 22:46:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.859 ************************************ 00:17:42.859 START TEST nvmf_connect_stress 00:17:42.859 ************************************ 00:17:42.859 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:43.121 * Looking for test storage... 
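nvmf_invalid's teardown (nvmftestfini) completed just above. A condensed sketch of the cleanup sequence as traced (pid 634604 and interface cvl_0_1 are this run's values; variable names here are illustrative, and the real nvmf/common.sh adds retries and error handling):

    # Sketch of the nvmftestfini path seen in the trace above.
    sync
    modprobe -v -r nvme-tcp                       # also drops nvme_fabrics/nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"            # pid 634604 (reactor_0) in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip SPDK-added rules
    ip netns delete "$spdk_ns" 2>/dev/null        # remove_spdk_ns ($spdk_ns is illustrative)
    ip -4 addr flush cvl_0_1                      # clear the test NIC address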
00:17:43.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.121 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:43.121 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:43.121 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:43.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.121 --rc genhtml_branch_coverage=1 00:17:43.121 --rc genhtml_function_coverage=1 00:17:43.121 --rc genhtml_legend=1 00:17:43.121 --rc geninfo_all_blocks=1 00:17:43.121 --rc geninfo_unexecuted_blocks=1 00:17:43.121 00:17:43.121 ' 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:43.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.121 --rc genhtml_branch_coverage=1 00:17:43.121 --rc genhtml_function_coverage=1 00:17:43.121 --rc genhtml_legend=1 00:17:43.121 --rc geninfo_all_blocks=1 00:17:43.121 --rc geninfo_unexecuted_blocks=1 00:17:43.121 00:17:43.121 ' 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:43.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.121 --rc genhtml_branch_coverage=1 00:17:43.121 --rc genhtml_function_coverage=1 00:17:43.121 --rc genhtml_legend=1 00:17:43.121 --rc geninfo_all_blocks=1 00:17:43.121 --rc geninfo_unexecuted_blocks=1 00:17:43.121 00:17:43.121 ' 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:43.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.121 --rc genhtml_branch_coverage=1 00:17:43.121 --rc genhtml_function_coverage=1 00:17:43.121 --rc genhtml_legend=1 00:17:43.121 --rc geninfo_all_blocks=1 00:17:43.121 --rc geninfo_unexecuted_blocks=1 00:17:43.121 00:17:43.121 ' 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.121 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:43.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.122 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.264 22:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:51.264 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:51.265 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # 
echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:51.265 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:51.265 Found net devices under 0000:31:00.0: cvl_0_0 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:51.265 Found net devices under 0000:31:00.1: cvl_0_1 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
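[Annotation] One real wart surfaces in the run of sourcing traced above: `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected`. It is harmless here (the script falls through to the `@37` check), but worth decoding: POSIX `[` requires both operands of `-eq` to be integers, and the traced command had expanded to `[ '' -eq 1 ]` because the guarded variable was unset. A minimal reproduction and two defensive rewrites follow; the variable name is illustrative, since the log does not show which flag line 33 actually tests:

```bash
# Reproduction: an empty string is not an integer, so test(1) errors out.
[ '' -eq 1 ]                 # -> "[: : integer expression expected", exit status 2

# Defensive rewrites, assuming some flag variable that may be unset/empty:
flag=""
[ "${flag:-0}" -eq 1 ] && echo enabled         # default the expansion to 0
[[ -n $flag && $flag -eq 1 ]] && echo enabled  # or require non-empty first
```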
00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:17:51.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:17:51.265 00:17:51.265 --- 10.0.0.2 ping statistics --- 00:17:51.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.265 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:17:51.265 00:17:51.265 --- 10.0.0.1 ping statistics --- 00:17:51.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.265 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.265 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=639991 00:17:51.266 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 639991 00:17:51.266 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:51.266 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 639991 ']' 00:17:51.266 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.266 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.266 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
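[Annotation] For readers skimming the xtrace wall, the network bring-up that just completed reduces to a short, self-contained sequence: the two back-to-back E810 ports become "target" (moved into a network namespace) and "initiator" (left in the root namespace). This is a condensed replay of the commands visible in the trace, with interface and namespace names taken verbatim from the log, not the full nvmf/common.sh logic:

```bash
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port toward the initiator interface:
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity check: both directions must answer before nvmf_tgt is started
# inside the namespace (the two one-packet pings logged above).
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```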
00:17:51.266 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.266 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.266 [2024-09-30 22:46:17.863080] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:17:51.266 [2024-09-30 22:46:17.863142] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.266 [2024-09-30 22:46:17.955855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:51.266 [2024-09-30 22:46:18.052014] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.266 [2024-09-30 22:46:18.052080] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.266 [2024-09-30 22:46:18.052088] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.266 [2024-09-30 22:46:18.052095] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.266 [2024-09-30 22:46:18.052102] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.266 [2024-09-30 22:46:18.052268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.266 [2024-09-30 22:46:18.052428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.266 [2024-09-30 22:46:18.052429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.985 [2024-09-30 22:46:18.746930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.985 
22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.985 [2024-09-30 22:46:18.783599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.985 NULL1 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=640176 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.985 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:51.986 22:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.986 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.247 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.247 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:52.247 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.247 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.247 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.819 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.819 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:52.819 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.819 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.819 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.080 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.080 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:53.080 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.080 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.080 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.343 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.343 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:53.343 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.343 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.343 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.605 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.605 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:53.605 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.605 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.605 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.866 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.866 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:53.866 22:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.866 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.866 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.438 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.438 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:54.438 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.438 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.438 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.698 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.698 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:54.698 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.699 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.699 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.959 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.959 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:54.959 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.959 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.959 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.220 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.220 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:55.220 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.220 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.220 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.480 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.480 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:55.480 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.480 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.480 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.051 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.051 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:56.051 22:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.051 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.051 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.311 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.311 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:56.311 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.311 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.311 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.572 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.572 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:56.572 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.572 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.572 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.833 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.833 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:56.833 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.833 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.833 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.404 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.404 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:57.404 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.405 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.405 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.665 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.665 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:57.665 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.665 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.665 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.927 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.927 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:57.927 22:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.927 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.927 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.188 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.188 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:58.188 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.188 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.188 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.447 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.447 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:58.448 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.448 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.448 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.018 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.018 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:59.018 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.018 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.018 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.278 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.278 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:59.278 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.278 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.278 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.539 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.539 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:59.539 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.539 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.539 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.800 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.800 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:17:59.800 22:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.800 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.800 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.061 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.061 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:18:00.061 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.061 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.061 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.632 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.632 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:18:00.632 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.632 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.632 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.892 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.892 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:18:00.892 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.892 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.892 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.153 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.153 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:18:01.153 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.153 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.153 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.419 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.419 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:18:01.419 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.419 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.419 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.681 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.681 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:18:01.681 22:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.681 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.681 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.942 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.202 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.202 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 640176 00:18:02.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (640176) - No such process 00:18:02.202 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 640176 00:18:02.202 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:02.202 rmmod nvme_tcp 00:18:02.202 rmmod nvme_fabrics 00:18:02.202 rmmod nvme_keyring 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 639991 ']' 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 639991 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 639991 ']' 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 639991 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 639991 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo 
']' 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 639991' 00:18:02.202 killing process with pid 639991 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 639991 00:18:02.202 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 639991 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.463 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.373 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:04.373 00:18:04.373 real 0m21.462s 00:18:04.373 user 0m42.116s 00:18:04.373 sys 0m9.534s 00:18:04.373 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:04.373 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.373 ************************************ 00:18:04.373 END TEST nvmf_connect_stress 00:18:04.373 ************************************ 00:18:04.373 22:46:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:04.373 22:46:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:04.373 22:46:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:04.373 22:46:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:04.635 ************************************ 00:18:04.635 START TEST nvmf_fused_ordering 00:18:04.635 ************************************ 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:04.635 * Looking for test storage... 
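[Annotation] The connect_stress test that just reported PASS is easier to follow as a skeleton than as the interleaved trace above. Reconstructed from the `rpc_cmd`, `kill -0`, and `connect_stress` lines in the log; `rpc_cmd` is the harness wrapper around scripts/rpc.py, and whether it reads a batch from stdin as sketched is an assumption based on the bare `rpc_cmd < file` shape at connect_stress.sh@35 (the rpc.txt contents themselves are never shown in the log):

```bash
rpcs=test/nvmf/target/rpc.txt    # full path in the trace, shortened here

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512       # 1000 blocks x 512 B null bdev

# (a "for i in $(seq 1 20)" loop fills $rpcs with a batch of RPCs --
#  that is the run of "for i ... / cat" lines in the trace)

test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!

# While the stress tool churns connects/disconnects for 10 s, keep replaying
# the RPC batch against the target. kill -0 sends no signal -- it only tests
# liveness, which is why the loop ends on the logged
# "kill: (640176) - No such process".
while kill -0 "$PERF_PID" 2> /dev/null; do
    rpc_cmd < "$rpcs"
done
wait "$PERF_PID" || true
rm -f "$rpcs"
```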
00:18:04.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:04.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.635 --rc genhtml_branch_coverage=1 00:18:04.635 --rc genhtml_function_coverage=1 00:18:04.635 --rc genhtml_legend=1 00:18:04.635 --rc geninfo_all_blocks=1 00:18:04.635 --rc geninfo_unexecuted_blocks=1 00:18:04.635 00:18:04.635 ' 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:04.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.635 --rc genhtml_branch_coverage=1 00:18:04.635 --rc genhtml_function_coverage=1 00:18:04.635 --rc genhtml_legend=1 00:18:04.635 --rc geninfo_all_blocks=1 00:18:04.635 --rc geninfo_unexecuted_blocks=1 00:18:04.635 00:18:04.635 ' 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:04.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.635 --rc genhtml_branch_coverage=1 00:18:04.635 --rc genhtml_function_coverage=1 00:18:04.635 --rc genhtml_legend=1 00:18:04.635 --rc geninfo_all_blocks=1 00:18:04.635 --rc geninfo_unexecuted_blocks=1 00:18:04.635 00:18:04.635 ' 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:04.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.635 --rc genhtml_branch_coverage=1 00:18:04.635 --rc genhtml_function_coverage=1 00:18:04.635 --rc genhtml_legend=1 00:18:04.635 --rc geninfo_all_blocks=1 00:18:04.635 --rc geninfo_unexecuted_blocks=1 00:18:04.635 00:18:04.635 ' 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.635 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:04.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.636 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:04.898 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:13.045 22:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:13.045 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # 
echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:13.045 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:13.045 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:13.046 Found net devices under 0000:31:00.0: cvl_0_0 00:18:13.046 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:13.046 Found net devices under 0000:31:00.1: cvl_0_1 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
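The discovery loop traced above resolves each whitelisted PCI function to its kernel netdev purely through sysfs: a NIC's interfaces appear as directory names under /sys/bus/pci/devices/<BDF>/net/. A standalone sketch of the same lookup, using the two e810 functions found in this run:

    # For each PCI function, glob its net interfaces from sysfs and strip the
    # path prefix, leaving bare interface names (cvl_0_0 / cvl_0_1 here).
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done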
00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:18:13.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:18:13.046 00:18:13.046 --- 10.0.0.2 ping statistics --- 00:18:13.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.046 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:18:13.046 00:18:13.046 --- 10.0.0.1 ping statistics --- 00:18:13.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.046 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=646599 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 646599 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 646599 ']' 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
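Condensed, the nvmf_tcp_init sequence traced above is: carve out a private network namespace for the target, move the first e810 port into it, address both ends of the link, and verify reachability in both directions before the target starts. The commands from the trace, gathered in order:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side, gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator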
00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.046 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.046 [2024-09-30 22:46:39.436325] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:18:13.047 [2024-09-30 22:46:39.436385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.047 [2024-09-30 22:46:39.528393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.047 [2024-09-30 22:46:39.622565] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.047 [2024-09-30 22:46:39.622626] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.047 [2024-09-30 22:46:39.622634] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.047 [2024-09-30 22:46:39.622642] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.047 [2024-09-30 22:46:39.622648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.047 [2024-09-30 22:46:39.622674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.308 [2024-09-30 22:46:40.311447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.308 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.570 [2024-09-30 22:46:40.335787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.570 NULL1 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.570 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:13.570 [2024-09-30 22:46:40.406486] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
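The rpc_cmd calls above are the entire target-side setup for this test. Assuming rpc_cmd wraps SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket the target announced, the equivalent standalone sequence is:

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10     # allow any host, serial number, 10 namespaces max
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create NULL1 1000 512   # 1000 MB, 512B blocks
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_wait_for_examine
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The 1000 MB null bdev matches the "Namespace ID: 1 size: 1GB" line the fused_ordering initiator reports below once it connects with the transport ID string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'.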
00:18:13.570 [2024-09-30 22:46:40.406531] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646645 ] 00:18:14.143 Attached to nqn.2016-06.io.spdk:cnode1 00:18:14.143 Namespace ID: 1 size: 1GB 00:18:14.143 fused_ordering(0) 00:18:14.143 fused_ordering(1) 00:18:14.143 fused_ordering(2) 00:18:14.143 fused_ordering(3) 00:18:14.143 fused_ordering(4) 00:18:14.143 fused_ordering(5) 00:18:14.143 fused_ordering(6) 00:18:14.143 fused_ordering(7) 00:18:14.143 fused_ordering(8) 00:18:14.143 fused_ordering(9) 00:18:14.143 fused_ordering(10) 00:18:14.143 fused_ordering(11) 00:18:14.143 fused_ordering(12) 00:18:14.143 fused_ordering(13) 00:18:14.143 fused_ordering(14) 00:18:14.143 fused_ordering(15) 00:18:14.143 fused_ordering(16) 00:18:14.143 fused_ordering(17) 00:18:14.143 fused_ordering(18) 00:18:14.143 fused_ordering(19) 00:18:14.143 fused_ordering(20) 00:18:14.143 fused_ordering(21) 00:18:14.143 fused_ordering(22) 00:18:14.143 fused_ordering(23) 00:18:14.143 fused_ordering(24) 00:18:14.143 fused_ordering(25) 00:18:14.143 fused_ordering(26) 00:18:14.143 fused_ordering(27) 00:18:14.143 fused_ordering(28) 00:18:14.143 fused_ordering(29) 00:18:14.143 fused_ordering(30) 00:18:14.143 fused_ordering(31) 00:18:14.143 fused_ordering(32) 00:18:14.143 fused_ordering(33) 00:18:14.143 fused_ordering(34) 00:18:14.143 fused_ordering(35) 00:18:14.143 fused_ordering(36) 00:18:14.143 fused_ordering(37) 00:18:14.143 fused_ordering(38) 00:18:14.143 fused_ordering(39) 00:18:14.143 fused_ordering(40) 00:18:14.143 fused_ordering(41) 00:18:14.143 fused_ordering(42) 00:18:14.143 fused_ordering(43) 00:18:14.143 fused_ordering(44) 00:18:14.143 fused_ordering(45) 00:18:14.143 fused_ordering(46) 00:18:14.143 fused_ordering(47) 00:18:14.143 fused_ordering(48) 00:18:14.143 fused_ordering(49) 00:18:14.143 fused_ordering(50) 00:18:14.143 fused_ordering(51) 00:18:14.143 fused_ordering(52) 00:18:14.143 fused_ordering(53) 00:18:14.143 fused_ordering(54) 00:18:14.143 fused_ordering(55) 00:18:14.143 fused_ordering(56) 00:18:14.143 fused_ordering(57) 00:18:14.143 fused_ordering(58) 00:18:14.143 fused_ordering(59) 00:18:14.143 fused_ordering(60) 00:18:14.143 fused_ordering(61) 00:18:14.143 fused_ordering(62) 00:18:14.143 fused_ordering(63) 00:18:14.143 fused_ordering(64) 00:18:14.143 fused_ordering(65) 00:18:14.143 fused_ordering(66) 00:18:14.143 fused_ordering(67) 00:18:14.143 fused_ordering(68) 00:18:14.143 fused_ordering(69) 00:18:14.143 fused_ordering(70) 00:18:14.143 fused_ordering(71) 00:18:14.143 fused_ordering(72) 00:18:14.143 fused_ordering(73) 00:18:14.143 fused_ordering(74) 00:18:14.143 fused_ordering(75) 00:18:14.143 fused_ordering(76) 00:18:14.143 fused_ordering(77) 00:18:14.143 fused_ordering(78) 00:18:14.143 fused_ordering(79) 00:18:14.143 fused_ordering(80) 00:18:14.143 fused_ordering(81) 00:18:14.143 fused_ordering(82) 00:18:14.143 fused_ordering(83) 00:18:14.143 fused_ordering(84) 00:18:14.143 fused_ordering(85) 00:18:14.143 fused_ordering(86) 00:18:14.143 fused_ordering(87) 00:18:14.143 fused_ordering(88) 00:18:14.143 fused_ordering(89) 00:18:14.143 fused_ordering(90) 00:18:14.143 fused_ordering(91) 00:18:14.143 fused_ordering(92) 00:18:14.143 fused_ordering(93) 00:18:14.143 fused_ordering(94) 00:18:14.143 fused_ordering(95) 00:18:14.143 fused_ordering(96) 00:18:14.143 fused_ordering(97) 00:18:14.143 fused_ordering(98) 
00:18:14.143 fused_ordering(99) 00:18:14.143 fused_ordering(100) 00:18:14.143 fused_ordering(101) 00:18:14.143 fused_ordering(102) 00:18:14.143 fused_ordering(103) 00:18:14.143 fused_ordering(104) 00:18:14.143 fused_ordering(105) 00:18:14.143 fused_ordering(106) 00:18:14.143 fused_ordering(107) 00:18:14.143 fused_ordering(108) 00:18:14.143 fused_ordering(109) 00:18:14.143 fused_ordering(110) 00:18:14.143 fused_ordering(111) 00:18:14.143 fused_ordering(112) 00:18:14.143 fused_ordering(113) 00:18:14.143 fused_ordering(114) 00:18:14.143 fused_ordering(115) 00:18:14.143 fused_ordering(116) 00:18:14.143 fused_ordering(117) 00:18:14.143 fused_ordering(118) 00:18:14.143 fused_ordering(119) 00:18:14.143 fused_ordering(120) 00:18:14.143 fused_ordering(121) 00:18:14.143 fused_ordering(122) 00:18:14.143 fused_ordering(123) 00:18:14.143 fused_ordering(124) 00:18:14.143 fused_ordering(125) 00:18:14.143 fused_ordering(126) 00:18:14.143 fused_ordering(127) 00:18:14.143 fused_ordering(128) 00:18:14.143 fused_ordering(129) 00:18:14.143 fused_ordering(130) 00:18:14.143 fused_ordering(131) 00:18:14.143 fused_ordering(132) 00:18:14.143 fused_ordering(133) 00:18:14.143 fused_ordering(134) 00:18:14.143 fused_ordering(135) 00:18:14.143 fused_ordering(136) 00:18:14.143 fused_ordering(137) 00:18:14.143 fused_ordering(138) 00:18:14.143 fused_ordering(139) 00:18:14.143 fused_ordering(140) 00:18:14.143 fused_ordering(141) 00:18:14.143 fused_ordering(142) 00:18:14.143 fused_ordering(143) 00:18:14.143 fused_ordering(144) 00:18:14.143 fused_ordering(145) 00:18:14.143 fused_ordering(146) 00:18:14.143 fused_ordering(147) 00:18:14.143 fused_ordering(148) 00:18:14.143 fused_ordering(149) 00:18:14.143 fused_ordering(150) 00:18:14.143 fused_ordering(151) 00:18:14.143 fused_ordering(152) 00:18:14.143 fused_ordering(153) 00:18:14.143 fused_ordering(154) 00:18:14.143 fused_ordering(155) 00:18:14.143 fused_ordering(156) 00:18:14.143 fused_ordering(157) 00:18:14.143 fused_ordering(158) 00:18:14.143 fused_ordering(159) 00:18:14.143 fused_ordering(160) 00:18:14.143 fused_ordering(161) 00:18:14.143 fused_ordering(162) 00:18:14.143 fused_ordering(163) 00:18:14.143 fused_ordering(164) 00:18:14.143 fused_ordering(165) 00:18:14.143 fused_ordering(166) 00:18:14.143 fused_ordering(167) 00:18:14.143 fused_ordering(168) 00:18:14.143 fused_ordering(169) 00:18:14.143 fused_ordering(170) 00:18:14.143 fused_ordering(171) 00:18:14.143 fused_ordering(172) 00:18:14.143 fused_ordering(173) 00:18:14.143 fused_ordering(174) 00:18:14.143 fused_ordering(175) 00:18:14.143 fused_ordering(176) 00:18:14.143 fused_ordering(177) 00:18:14.143 fused_ordering(178) 00:18:14.143 fused_ordering(179) 00:18:14.143 fused_ordering(180) 00:18:14.143 fused_ordering(181) 00:18:14.143 fused_ordering(182) 00:18:14.143 fused_ordering(183) 00:18:14.143 fused_ordering(184) 00:18:14.143 fused_ordering(185) 00:18:14.143 fused_ordering(186) 00:18:14.143 fused_ordering(187) 00:18:14.143 fused_ordering(188) 00:18:14.143 fused_ordering(189) 00:18:14.143 fused_ordering(190) 00:18:14.143 fused_ordering(191) 00:18:14.143 fused_ordering(192) 00:18:14.143 fused_ordering(193) 00:18:14.143 fused_ordering(194) 00:18:14.143 fused_ordering(195) 00:18:14.143 fused_ordering(196) 00:18:14.143 fused_ordering(197) 00:18:14.143 fused_ordering(198) 00:18:14.143 fused_ordering(199) 00:18:14.143 fused_ordering(200) 00:18:14.143 fused_ordering(201) 00:18:14.143 fused_ordering(202) 00:18:14.143 fused_ordering(203) 00:18:14.143 fused_ordering(204) 00:18:14.143 fused_ordering(205) 00:18:14.404 
fused_ordering(206) 00:18:14.404 fused_ordering(207) 00:18:14.404 fused_ordering(208) 00:18:14.404 fused_ordering(209) 00:18:14.404 fused_ordering(210) 00:18:14.404 fused_ordering(211) 00:18:14.404 fused_ordering(212) 00:18:14.404 fused_ordering(213) 00:18:14.404 fused_ordering(214) 00:18:14.404 fused_ordering(215) 00:18:14.404 fused_ordering(216) 00:18:14.404 fused_ordering(217) 00:18:14.404 fused_ordering(218) 00:18:14.404 fused_ordering(219) 00:18:14.404 fused_ordering(220) 00:18:14.404 fused_ordering(221) 00:18:14.404 fused_ordering(222) 00:18:14.404 fused_ordering(223) 00:18:14.404 fused_ordering(224) 00:18:14.404 fused_ordering(225) 00:18:14.404 fused_ordering(226) 00:18:14.404 fused_ordering(227) 00:18:14.404 fused_ordering(228) 00:18:14.404 fused_ordering(229) 00:18:14.404 fused_ordering(230) 00:18:14.404 fused_ordering(231) 00:18:14.404 fused_ordering(232) 00:18:14.404 fused_ordering(233) 00:18:14.404 fused_ordering(234) 00:18:14.404 fused_ordering(235) 00:18:14.404 fused_ordering(236) 00:18:14.404 fused_ordering(237) 00:18:14.404 fused_ordering(238) 00:18:14.404 fused_ordering(239) 00:18:14.404 fused_ordering(240) 00:18:14.404 fused_ordering(241) 00:18:14.404 fused_ordering(242) 00:18:14.404 fused_ordering(243) 00:18:14.404 fused_ordering(244) 00:18:14.404 fused_ordering(245) 00:18:14.404 fused_ordering(246) 00:18:14.404 fused_ordering(247) 00:18:14.404 fused_ordering(248) 00:18:14.404 fused_ordering(249) 00:18:14.404 fused_ordering(250) 00:18:14.404 fused_ordering(251) 00:18:14.404 fused_ordering(252) 00:18:14.404 fused_ordering(253) 00:18:14.404 fused_ordering(254) 00:18:14.404 fused_ordering(255) 00:18:14.404 fused_ordering(256) 00:18:14.404 fused_ordering(257) 00:18:14.404 fused_ordering(258) 00:18:14.404 fused_ordering(259) 00:18:14.404 fused_ordering(260) 00:18:14.404 fused_ordering(261) 00:18:14.404 fused_ordering(262) 00:18:14.404 fused_ordering(263) 00:18:14.404 fused_ordering(264) 00:18:14.404 fused_ordering(265) 00:18:14.404 fused_ordering(266) 00:18:14.404 fused_ordering(267) 00:18:14.404 fused_ordering(268) 00:18:14.404 fused_ordering(269) 00:18:14.404 fused_ordering(270) 00:18:14.404 fused_ordering(271) 00:18:14.404 fused_ordering(272) 00:18:14.404 fused_ordering(273) 00:18:14.404 fused_ordering(274) 00:18:14.404 fused_ordering(275) 00:18:14.404 fused_ordering(276) 00:18:14.404 fused_ordering(277) 00:18:14.404 fused_ordering(278) 00:18:14.404 fused_ordering(279) 00:18:14.404 fused_ordering(280) 00:18:14.404 fused_ordering(281) 00:18:14.404 fused_ordering(282) 00:18:14.404 fused_ordering(283) 00:18:14.404 fused_ordering(284) 00:18:14.404 fused_ordering(285) 00:18:14.404 fused_ordering(286) 00:18:14.404 fused_ordering(287) 00:18:14.404 fused_ordering(288) 00:18:14.404 fused_ordering(289) 00:18:14.404 fused_ordering(290) 00:18:14.404 fused_ordering(291) 00:18:14.404 fused_ordering(292) 00:18:14.404 fused_ordering(293) 00:18:14.404 fused_ordering(294) 00:18:14.404 fused_ordering(295) 00:18:14.404 fused_ordering(296) 00:18:14.404 fused_ordering(297) 00:18:14.404 fused_ordering(298) 00:18:14.404 fused_ordering(299) 00:18:14.404 fused_ordering(300) 00:18:14.404 fused_ordering(301) 00:18:14.404 fused_ordering(302) 00:18:14.404 fused_ordering(303) 00:18:14.404 fused_ordering(304) 00:18:14.404 fused_ordering(305) 00:18:14.404 fused_ordering(306) 00:18:14.404 fused_ordering(307) 00:18:14.404 fused_ordering(308) 00:18:14.404 fused_ordering(309) 00:18:14.404 fused_ordering(310) 00:18:14.404 fused_ordering(311) 00:18:14.404 fused_ordering(312) 00:18:14.404 fused_ordering(313) 
00:18:14.405 fused_ordering(314) 00:18:14.405 fused_ordering(315) 00:18:14.405 fused_ordering(316) 00:18:14.405 fused_ordering(317) 00:18:14.405 fused_ordering(318) 00:18:14.405 fused_ordering(319) 00:18:14.405 fused_ordering(320) 00:18:14.405 fused_ordering(321) 00:18:14.405 fused_ordering(322) 00:18:14.405 fused_ordering(323) 00:18:14.405 fused_ordering(324) 00:18:14.405 fused_ordering(325) 00:18:14.405 fused_ordering(326) 00:18:14.405 fused_ordering(327) 00:18:14.405 fused_ordering(328) 00:18:14.405 fused_ordering(329) 00:18:14.405 fused_ordering(330) 00:18:14.405 fused_ordering(331) 00:18:14.405 fused_ordering(332) 00:18:14.405 fused_ordering(333) 00:18:14.405 fused_ordering(334) 00:18:14.405 fused_ordering(335) 00:18:14.405 fused_ordering(336) 00:18:14.405 fused_ordering(337) 00:18:14.405 fused_ordering(338) 00:18:14.405 fused_ordering(339) 00:18:14.405 fused_ordering(340) 00:18:14.405 fused_ordering(341) 00:18:14.405 fused_ordering(342) 00:18:14.405 fused_ordering(343) 00:18:14.405 fused_ordering(344) 00:18:14.405 fused_ordering(345) 00:18:14.405 fused_ordering(346) 00:18:14.405 fused_ordering(347) 00:18:14.405 fused_ordering(348) 00:18:14.405 fused_ordering(349) 00:18:14.405 fused_ordering(350) 00:18:14.405 fused_ordering(351) 00:18:14.405 fused_ordering(352) 00:18:14.405 fused_ordering(353) 00:18:14.405 fused_ordering(354) 00:18:14.405 fused_ordering(355) 00:18:14.405 fused_ordering(356) 00:18:14.405 fused_ordering(357) 00:18:14.405 fused_ordering(358) 00:18:14.405 fused_ordering(359) 00:18:14.405 fused_ordering(360) 00:18:14.405 fused_ordering(361) 00:18:14.405 fused_ordering(362) 00:18:14.405 fused_ordering(363) 00:18:14.405 fused_ordering(364) 00:18:14.405 fused_ordering(365) 00:18:14.405 fused_ordering(366) 00:18:14.405 fused_ordering(367) 00:18:14.405 fused_ordering(368) 00:18:14.405 fused_ordering(369) 00:18:14.405 fused_ordering(370) 00:18:14.405 fused_ordering(371) 00:18:14.405 fused_ordering(372) 00:18:14.405 fused_ordering(373) 00:18:14.405 fused_ordering(374) 00:18:14.405 fused_ordering(375) 00:18:14.405 fused_ordering(376) 00:18:14.405 fused_ordering(377) 00:18:14.405 fused_ordering(378) 00:18:14.405 fused_ordering(379) 00:18:14.405 fused_ordering(380) 00:18:14.405 fused_ordering(381) 00:18:14.405 fused_ordering(382) 00:18:14.405 fused_ordering(383) 00:18:14.405 fused_ordering(384) 00:18:14.405 fused_ordering(385) 00:18:14.405 fused_ordering(386) 00:18:14.405 fused_ordering(387) 00:18:14.405 fused_ordering(388) 00:18:14.405 fused_ordering(389) 00:18:14.405 fused_ordering(390) 00:18:14.405 fused_ordering(391) 00:18:14.405 fused_ordering(392) 00:18:14.405 fused_ordering(393) 00:18:14.405 fused_ordering(394) 00:18:14.405 fused_ordering(395) 00:18:14.405 fused_ordering(396) 00:18:14.405 fused_ordering(397) 00:18:14.405 fused_ordering(398) 00:18:14.405 fused_ordering(399) 00:18:14.405 fused_ordering(400) 00:18:14.405 fused_ordering(401) 00:18:14.405 fused_ordering(402) 00:18:14.405 fused_ordering(403) 00:18:14.405 fused_ordering(404) 00:18:14.405 fused_ordering(405) 00:18:14.405 fused_ordering(406) 00:18:14.405 fused_ordering(407) 00:18:14.405 fused_ordering(408) 00:18:14.405 fused_ordering(409) 00:18:14.405 fused_ordering(410) 00:18:14.665 fused_ordering(411) 00:18:14.665 fused_ordering(412) 00:18:14.665 fused_ordering(413) 00:18:14.665 fused_ordering(414) 00:18:14.665 fused_ordering(415) 00:18:14.665 fused_ordering(416) 00:18:14.665 fused_ordering(417) 00:18:14.665 fused_ordering(418) 00:18:14.665 fused_ordering(419) 00:18:14.665 fused_ordering(420) 00:18:14.665 
fused_ordering(421) 00:18:14.665 fused_ordering(422) 00:18:14.665 fused_ordering(423) 00:18:14.665 fused_ordering(424) 00:18:14.665 fused_ordering(425) 00:18:14.665 fused_ordering(426) 00:18:14.665 fused_ordering(427) 00:18:14.665 fused_ordering(428) 00:18:14.665 fused_ordering(429) 00:18:14.665 fused_ordering(430) 00:18:14.665 fused_ordering(431) 00:18:14.665 fused_ordering(432) 00:18:14.665 fused_ordering(433) 00:18:14.665 fused_ordering(434) 00:18:14.665 fused_ordering(435) 00:18:14.665 fused_ordering(436) 00:18:14.665 fused_ordering(437) 00:18:14.665 fused_ordering(438) 00:18:14.665 fused_ordering(439) 00:18:14.665 fused_ordering(440) 00:18:14.665 fused_ordering(441) 00:18:14.665 fused_ordering(442) 00:18:14.665 fused_ordering(443) 00:18:14.665 fused_ordering(444) 00:18:14.665 fused_ordering(445) 00:18:14.665 fused_ordering(446) 00:18:14.665 fused_ordering(447) 00:18:14.665 fused_ordering(448) 00:18:14.665 fused_ordering(449) 00:18:14.665 fused_ordering(450) 00:18:14.665 fused_ordering(451) 00:18:14.665 fused_ordering(452) 00:18:14.665 fused_ordering(453) 00:18:14.665 fused_ordering(454) 00:18:14.665 fused_ordering(455) 00:18:14.666 fused_ordering(456) 00:18:14.666 fused_ordering(457) 00:18:14.666 fused_ordering(458) 00:18:14.666 fused_ordering(459) 00:18:14.666 fused_ordering(460) 00:18:14.666 fused_ordering(461) 00:18:14.666 fused_ordering(462) 00:18:14.666 fused_ordering(463) 00:18:14.666 fused_ordering(464) 00:18:14.666 fused_ordering(465) 00:18:14.666 fused_ordering(466) 00:18:14.666 fused_ordering(467) 00:18:14.666 fused_ordering(468) 00:18:14.666 fused_ordering(469) 00:18:14.666 fused_ordering(470) 00:18:14.666 fused_ordering(471) 00:18:14.666 fused_ordering(472) 00:18:14.666 fused_ordering(473) 00:18:14.666 fused_ordering(474) 00:18:14.666 fused_ordering(475) 00:18:14.666 fused_ordering(476) 00:18:14.666 fused_ordering(477) 00:18:14.666 fused_ordering(478) 00:18:14.666 fused_ordering(479) 00:18:14.666 fused_ordering(480) 00:18:14.666 fused_ordering(481) 00:18:14.666 fused_ordering(482) 00:18:14.666 fused_ordering(483) 00:18:14.666 fused_ordering(484) 00:18:14.666 fused_ordering(485) 00:18:14.666 fused_ordering(486) 00:18:14.666 fused_ordering(487) 00:18:14.666 fused_ordering(488) 00:18:14.666 fused_ordering(489) 00:18:14.666 fused_ordering(490) 00:18:14.666 fused_ordering(491) 00:18:14.666 fused_ordering(492) 00:18:14.666 fused_ordering(493) 00:18:14.666 fused_ordering(494) 00:18:14.666 fused_ordering(495) 00:18:14.666 fused_ordering(496) 00:18:14.666 fused_ordering(497) 00:18:14.666 fused_ordering(498) 00:18:14.666 fused_ordering(499) 00:18:14.666 fused_ordering(500) 00:18:14.666 fused_ordering(501) 00:18:14.666 fused_ordering(502) 00:18:14.666 fused_ordering(503) 00:18:14.666 fused_ordering(504) 00:18:14.666 fused_ordering(505) 00:18:14.666 fused_ordering(506) 00:18:14.666 fused_ordering(507) 00:18:14.666 fused_ordering(508) 00:18:14.666 fused_ordering(509) 00:18:14.666 fused_ordering(510) 00:18:14.666 fused_ordering(511) 00:18:14.666 fused_ordering(512) 00:18:14.666 fused_ordering(513) 00:18:14.666 fused_ordering(514) 00:18:14.666 fused_ordering(515) 00:18:14.666 fused_ordering(516) 00:18:14.666 fused_ordering(517) 00:18:14.666 fused_ordering(518) 00:18:14.666 fused_ordering(519) 00:18:14.666 fused_ordering(520) 00:18:14.666 fused_ordering(521) 00:18:14.666 fused_ordering(522) 00:18:14.666 fused_ordering(523) 00:18:14.666 fused_ordering(524) 00:18:14.666 fused_ordering(525) 00:18:14.666 fused_ordering(526) 00:18:14.666 fused_ordering(527) 00:18:14.666 fused_ordering(528) 
00:18:14.666 fused_ordering(529) [... 495 consecutive fused_ordering iterations (529 through 1023) elided for readability; the counter advanced monotonically with timestamps 00:18:14.666 -> 00:18:15.236 -> 00:18:16.177 ...] fused_ordering(1023) 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:16.178 rmmod nvme_tcp 00:18:16.178 rmmod nvme_fabrics 00:18:16.178 rmmod nvme_keyring 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:16.178 22:46:42
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 646599 ']' 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 646599 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 646599 ']' 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 646599 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.178 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 646599 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 646599' 00:18:16.178 killing process with pid 646599 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 646599 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 646599 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.178 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:18.727 00:18:18.727 real 0m13.850s 00:18:18.727 user 0m7.296s 00:18:18.727 sys 0m7.454s 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.727 ************************************ 00:18:18.727 END TEST nvmf_fused_ordering 00:18:18.727 
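Editor's note: the teardown traced above (killprocess 646599) follows the autotest kill-and-reap pattern: probe the pid with kill -0, confirm via ps that the target is not the sudo wrapper itself, then kill and wait so the reactor process is reaped before the next test starts. A minimal sketch of that pattern, reconstructed from the traced commands rather than copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0     # nothing to do if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_1 in this run
        [ "$name" = sudo ] && return 1             # simplification: never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap; the exit status is irrelevant here
    }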
************************************ 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:18.727 ************************************ 00:18:18.727 START TEST nvmf_ns_masking 00:18:18.727 ************************************ 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:18.727 * Looking for test storage... 00:18:18.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.727 --rc genhtml_branch_coverage=1 00:18:18.727 --rc genhtml_function_coverage=1 00:18:18.727 --rc genhtml_legend=1 00:18:18.727 --rc geninfo_all_blocks=1 00:18:18.727 --rc geninfo_unexecuted_blocks=1 00:18:18.727 00:18:18.727 ' 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.727 --rc genhtml_branch_coverage=1 00:18:18.727 --rc genhtml_function_coverage=1 00:18:18.727 --rc genhtml_legend=1 00:18:18.727 --rc geninfo_all_blocks=1 00:18:18.727 --rc geninfo_unexecuted_blocks=1 00:18:18.727 00:18:18.727 ' 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.727 --rc genhtml_branch_coverage=1 00:18:18.727 --rc genhtml_function_coverage=1 00:18:18.727 --rc genhtml_legend=1 00:18:18.727 --rc geninfo_all_blocks=1 00:18:18.727 --rc geninfo_unexecuted_blocks=1 00:18:18.727 00:18:18.727 ' 00:18:18.727 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.727 --rc genhtml_branch_coverage=1 00:18:18.727 --rc genhtml_function_coverage=1 00:18:18.727 --rc genhtml_legend=1 00:18:18.727 --rc geninfo_all_blocks=1 00:18:18.727 --rc geninfo_unexecuted_blocks=1 00:18:18.727 00:18:18.727 ' 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
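Editor's note: the lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x by comparing version components left to right. A hedged restatement of that comparison, simplified from the traced cmp_versions logic (the real helper also splits on ':' and supports other operators):

    version_lt() {                        # succeed if $1 sorts strictly before $2
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                          # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'   # prints: lcov predates 2.x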
nvmf/common.sh@7 -- # uname -s 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a52063f6-7664-415f-b806-40e22ab9642a 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=34169449-4222-4119-96d1-b6b44e61821f 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f65ecae4-1e93-4772-9d78-a212d6604c7d 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:18.728 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:26.875 22:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:26.875 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:26.875 22:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:26.875 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:26.875 Found net devices under 0000:31:00.0: cvl_0_0 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:26.875 
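Editor's note: the scan above maps each allow-listed PCI function (two Intel E810 ports, vendor:device 0x8086:0x159b) to its kernel interface by globbing sysfs, exactly as the traced nvmf/common.sh lines do. Condensed here with this run's addresses filled in:

    pci_devs=(0000:31:00.0 0000:31:00.1)              # the E810 ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # every bound network function lists its interface name under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs path prefix
        net_devs+=("${pci_net_devs[@]}")
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"  # cvl_0_0 and cvl_0_1 here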
22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:26.875 Found net devices under 0000:31:00.1: cvl_0_1 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:26.875 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.876 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.876 22:46:53 
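Editor's note: with both ports in one host, nvmf_tcp_init isolates the target-side port in a network namespace so initiator and target traffic really crosses the wire: cvl_0_1 keeps 10.0.0.1 in the root namespace while cvl_0_0 gets 10.0.0.2 inside cvl_0_0_ns_spdk. The sequence, condensed from the trace, with reachability verified both ways just below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator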
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:26.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:18:26.876 00:18:26.876 --- 10.0.0.2 ping statistics --- 00:18:26.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.876 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:18:26.876 00:18:26.876 --- 10.0.0.1 ping statistics --- 00:18:26.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.876 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=651610 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 651610 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 651610 ']' 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 
-- # local max_retries=100 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:26.876 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:26.876 [2024-09-30 22:46:53.432146] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:18:26.876 [2024-09-30 22:46:53.432214] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.876 [2024-09-30 22:46:53.524966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.876 [2024-09-30 22:46:53.619063] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.876 [2024-09-30 22:46:53.619131] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.876 [2024-09-30 22:46:53.619140] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.876 [2024-09-30 22:46:53.619147] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.876 [2024-09-30 22:46:53.619154] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.876 [2024-09-30 22:46:53.619183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.449 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.449 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:27.449 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:27.449 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:27.449 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:27.449 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.449 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:27.449 [2024-09-30 22:46:54.461194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.710 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:27.710 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:27.710 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:27.710 Malloc1 00:18:27.971 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:27.971 
Malloc2 00:18:27.971 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:28.232 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:28.493 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.493 [2024-09-30 22:46:55.498871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.755 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:28.755 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f65ecae4-1e93-4772-9d78-a212d6604c7d -a 10.0.0.2 -s 4420 -i 4 00:18:28.755 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:28.755 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:28.755 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.755 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:28.755 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:30.825 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:30.825 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:30.825 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:30.825 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:30.825 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.825 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:30.825 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:30.825 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:31.086 [ 0]:0x1 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
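Editor's note: the bring-up traced above reduces to a short rpc.py sequence on the target plus one initiator-side connect. Condensed from the trace; the full Jenkins workspace path to rpc.py is shortened here, while the NQNs, host UUID, and sizes are the actual values from this run:

    rpc=scripts/rpc.py    # stands in for the full /var/jenkins/... path in the trace
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MiB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I f65ecae4-1e93-4772-9d78-a212d6604c7d -a 10.0.0.2 -s 4420 -i 4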
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fec402bec27a4e349e00e2bbad7b5057 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fec402bec27a4e349e00e2bbad7b5057 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.086 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:31.347 [ 0]:0x1 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fec402bec27a4e349e00e2bbad7b5057 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fec402bec27a4e349e00e2bbad7b5057 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.347 [ 1]:0x2 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03863507555043a3aca1116a4f87daeb 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03863507555043a3aca1116a4f87daeb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:31.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.347 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:31.608 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:31.867 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:31.867 22:46:58 
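Editor's note: the ns_is_visible helper traced above decides visibility in two steps: grep the nsid out of nvme list-ns, then check that nvme id-ns reports a non-zero NGUID, since a namespace masked from this host identifies as all zeros (the 00000000... comparisons in the trace). A sketch reconstructed from the traced commands, not the verbatim ns_masking.sh body:

    ns_is_visible() {
        local nsid=$1                                   # 0x1 or 0x2 in this test
        nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]   # all zeros => masked
    }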
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f65ecae4-1e93-4772-9d78-a212d6604c7d -a 10.0.0.2 -s 4420 -i 4 00:18:31.867 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:31.867 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.867 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:31.868 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:31.868 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:31.868 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:34.410 [ 0]:0x2 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:34.410 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03863507555043a3aca1116a4f87daeb 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03863507555043a3aca1116a4f87daeb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:34.410 [ 0]:0x1 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fec402bec27a4e349e00e2bbad7b5057 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fec402bec27a4e349e00e2bbad7b5057 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:34.410 [ 1]:0x2 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03863507555043a3aca1116a4f87daeb 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03863507555043a3aca1116a4f87daeb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:34.410 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:34.671 [ 0]:0x2 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:34.671 22:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03863507555043a3aca1116a4f87daeb 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03863507555043a3aca1116a4f87daeb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:34.671 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:34.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.932 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:34.932 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:34.932 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f65ecae4-1e93-4772-9d78-a212d6604c7d -a 10.0.0.2 -s 4420 -i 4 00:18:35.191 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:35.191 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:35.191 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:35.191 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:35.191 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:35.191 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:37.104 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:37.104 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:37.104 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:37.104 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:37.104 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.104 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:37.104 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:37.104 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:37.364 [ 0]:0x1 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fec402bec27a4e349e00e2bbad7b5057 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fec402bec27a4e349e00e2bbad7b5057 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:37.364 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:37.364 [ 1]:0x2 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03863507555043a3aca1116a4f87daeb 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03863507555043a3aca1116a4f87daeb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:37.625 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:37.886 [ 0]:0x2 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03863507555043a3aca1116a4f87daeb 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03863507555043a3aca1116a4f87daeb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.886 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:37.886 
22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:37.886 [2024-09-30 22:47:04.900521] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:38.147 request: 00:18:38.148 { 00:18:38.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.148 "nsid": 2, 00:18:38.148 "host": "nqn.2016-06.io.spdk:host1", 00:18:38.148 "method": "nvmf_ns_remove_host", 00:18:38.148 "req_id": 1 00:18:38.148 } 00:18:38.148 Got JSON-RPC error response 00:18:38.148 response: 00:18:38.148 { 00:18:38.148 "code": -32602, 00:18:38.148 "message": "Invalid parameters" 00:18:38.148 } 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.148 
22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:38.148 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:38.148 [ 0]:0x2 00:18:38.148 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:38.148 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:38.148 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03863507555043a3aca1116a4f87daeb 00:18:38.148 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03863507555043a3aca1116a4f87daeb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:38.148 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:38.148 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:38.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=653933 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 653933 /var/tmp/host.sock 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 653933 ']' 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:38.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.409 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:38.409 [2024-09-30 22:47:05.250324] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
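Annotation: the spdk_tgt launch traced above brings up a second SPDK application on its own JSON-RPC socket (-r /var/tmp/host.sock) and core mask (-m 2), so host-side bdev_nvme RPCs stay off the target's default /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming relative paths inside the spdk checkout and substituting a simple poll loop for the suite's waitforlisten helper:
# Start a host-side SPDK app on a dedicated RPC socket and core mask.
build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!
# Poll until the app answers on the UNIX domain socket; waitforlisten does
# roughly this with bounded retries and a timeout.
until scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done
# Subsequent host-side RPCs name the same socket, as in the traces below:
#   scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 ...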
00:18:38.409 [2024-09-30 22:47:05.250375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653933 ] 00:18:38.409 [2024-09-30 22:47:05.331107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.409 [2024-09-30 22:47:05.395757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.351 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.351 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:39.351 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:39.351 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:39.611 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a52063f6-7664-415f-b806-40e22ab9642a 00:18:39.611 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:39.611 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A52063F67664415FB80640E22AB9642A -i 00:18:39.611 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 34169449-4222-4119-96d1-b6b44e61821f 00:18:39.611 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:39.611 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 341694494222411996D1B6B44E61821F -i 00:18:39.872 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:40.134 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:40.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:40.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:40.393 nvme0n1 00:18:40.654 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:40.654 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:40.914 nvme1n2 00:18:40.914 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:40.914 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:40.914 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:40.914 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:40.914 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:41.175 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:41.175 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:41.175 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:41.175 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:41.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a52063f6-7664-415f-b806-40e22ab9642a == \a\5\2\0\6\3\f\6\-\7\6\6\4\-\4\1\5\f\-\b\8\0\6\-\4\0\e\2\2\a\b\9\6\4\2\a ]] 00:18:41.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:41.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:41.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 34169449-4222-4119-96d1-b6b44e61821f == \3\4\1\6\9\4\4\9\-\4\2\2\2\-\4\1\1\9\-\9\6\d\1\-\b\6\b\4\4\e\6\1\8\2\1\f ]] 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 653933 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 653933 ']' 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 653933 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 653933 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 653933' 00:18:41.436 killing 
process with pid 653933 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 653933 00:18:41.436 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 653933 00:18:41.697 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.957 rmmod nvme_tcp 00:18:41.957 rmmod nvme_fabrics 00:18:41.957 rmmod nvme_keyring 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.957 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 651610 ']' 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 651610 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 651610 ']' 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 651610 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 651610 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 651610' 00:18:41.958 killing process with pid 651610 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 651610 00:18:41.958 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 651610 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.217 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:44.760 00:18:44.760 real 0m25.830s 00:18:44.760 user 0m26.130s 00:18:44.760 sys 0m8.066s 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:44.760 ************************************ 00:18:44.760 END TEST nvmf_ns_masking 00:18:44.760 ************************************ 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.760 ************************************ 00:18:44.760 START TEST nvmf_nvme_cli 00:18:44.760 ************************************ 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:44.760 * Looking for test storage... 
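Annotation: condensing the masking behavior exercised by the nvmf_ns_masking run that just finished: per-host visibility is flipped with the nvmf_ns_add_host / nvmf_ns_remove_host RPCs, and the host-side probe treats an all-zero NGUID from nvme id-ns as a masked namespace. A paraphrased sketch of that probe (the actual helper is ns_is_visible in target/ns_masking.sh; this rendering is illustrative):
# Flip visibility of namespace 1 on cnode1 for host1, as traced above:
#   scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
#   scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
ns_probe() {        # paraphrase of the suite's ns_is_visible
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"   # prints "[ n]:<nsid>" when listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # The test keys pass/fail off this comparison: a masked namespace
        # reports NGUID 000...0, so this fails and the NOT wrapper inverts it.
        [[ $nguid != "00000000000000000000000000000000" ]]
}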
00:18:44.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.760 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:44.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.761 --rc genhtml_branch_coverage=1 00:18:44.761 --rc genhtml_function_coverage=1 00:18:44.761 --rc genhtml_legend=1 00:18:44.761 --rc geninfo_all_blocks=1 00:18:44.761 --rc geninfo_unexecuted_blocks=1 00:18:44.761 00:18:44.761 ' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:44.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.761 --rc genhtml_branch_coverage=1 00:18:44.761 --rc genhtml_function_coverage=1 00:18:44.761 --rc genhtml_legend=1 00:18:44.761 --rc geninfo_all_blocks=1 00:18:44.761 --rc geninfo_unexecuted_blocks=1 00:18:44.761 00:18:44.761 ' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:44.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.761 --rc genhtml_branch_coverage=1 00:18:44.761 --rc genhtml_function_coverage=1 00:18:44.761 --rc genhtml_legend=1 00:18:44.761 --rc geninfo_all_blocks=1 00:18:44.761 --rc geninfo_unexecuted_blocks=1 00:18:44.761 00:18:44.761 ' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:44.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.761 --rc genhtml_branch_coverage=1 00:18:44.761 --rc genhtml_function_coverage=1 00:18:44.761 --rc genhtml_legend=1 00:18:44.761 --rc geninfo_all_blocks=1 00:18:44.761 --rc geninfo_unexecuted_blocks=1 00:18:44.761 00:18:44.761 ' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.761 22:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:52.900 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:52.900 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:52.900 22:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:52.900 Found net devices under 0000:31:00.0: cvl_0_0 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:52.900 Found net devices under 0000:31:00.1: cvl_0_1 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.900 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.900 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.900 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.900 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:52.900 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.900 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.900 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:52.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:18:52.901 00:18:52.901 --- 10.0.0.2 ping statistics --- 00:18:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.901 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
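Everything nvmf_tcp_init does above reduces to a short recipe: move one port of the E810 pair into a private network namespace, address both ends of the 10.0.0.0/24 link, and open NVMe/TCP port 4420 in the firewall; the two pings around this point verify both directions. A condensed sketch using this run's cvl_0_* names (ipts is plain iptables plus an SPDK_NVMF comment tag that teardown strips later):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'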
00:18:52.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:18:52.901 00:18:52.901 --- 10.0.0.1 ping statistics --- 00:18:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.901 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=659011 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 659011 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 659011 ']' 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.901 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:52.901 [2024-09-30 22:47:19.287315] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
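With the namespace verified, nvmfappstart launches the target inside it so the listener binds the moved port, then blocks until the RPC socket answers. A minimal sketch assuming relative paths from the spdk checkout; the suite's waitforlisten helper does the polling more carefully, with bounded retries and a pid liveness check:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the app answers on /var/tmp/spdk.sock before provisioning anything
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done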
00:18:52.901 [2024-09-30 22:47:19.287384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.901 [2024-09-30 22:47:19.378594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.901 [2024-09-30 22:47:19.478568] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.901 [2024-09-30 22:47:19.478632] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.901 [2024-09-30 22:47:19.478642] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.901 [2024-09-30 22:47:19.478649] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.901 [2024-09-30 22:47:19.478655] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.901 [2024-09-30 22:47:19.478761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.901 [2024-09-30 22:47:19.478941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.901 [2024-09-30 22:47:19.479101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.901 [2024-09-30 22:47:19.479101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.162 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.162 [2024-09-30 22:47:20.170884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.422 Malloc0 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
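rpc_cmd in this trace is shorthand for scripts/rpc.py against that socket. Written out, the provisioning captured above creates the TCP transport and two 64 MiB malloc bdevs with 512-byte blocks (-u 8192 sets the transport's io_unit_size; flags exactly as in the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # the calls that follow attach both bdevs to cnode1 and open the 10.0.0.2:4420 listener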
00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.422 Malloc1 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.422 [2024-09-30 22:47:20.272338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.422 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:18:53.683 00:18:53.683 Discovery Log Number of Records 2, Generation counter 2 00:18:53.683 =====Discovery Log Entry 0====== 00:18:53.683 trtype: tcp 00:18:53.683 adrfam: ipv4 00:18:53.683 subtype: current discovery subsystem 00:18:53.683 treq: not required 00:18:53.683 portid: 0 00:18:53.683 trsvcid: 4420 00:18:53.683 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:53.683 traddr: 10.0.0.2 00:18:53.683 eflags: explicit discovery connections, duplicate discovery information 00:18:53.683 sectype: none 00:18:53.683 =====Discovery Log Entry 1====== 00:18:53.683 trtype: tcp 00:18:53.683 adrfam: ipv4 00:18:53.683 subtype: nvme subsystem 00:18:53.683 treq: not required 00:18:53.683 portid: 0 00:18:53.683 trsvcid: 4420 00:18:53.683 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:53.683 traddr: 10.0.0.2 00:18:53.683 eflags: none 00:18:53.683 sectype: none 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:53.683 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:55.594 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:55.594 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:55.594 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.594 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:55.594 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:55.594 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:57.506 22:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:57.506 /dev/nvme0n2 ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:57.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.506 22:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:57.506 rmmod nvme_tcp 00:18:57.506 rmmod nvme_fabrics 00:18:57.506 rmmod nvme_keyring 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:57.506 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 659011 ']' 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 659011 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 659011 ']' 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 659011 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 659011 
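Teardown runs setup in reverse: disconnect the initiator, wait for the SPDKISFASTANDAWESOME serial to vanish from lsblk, delete the subsystem, unload the host-side modules, and strip the tagged firewall rules. A simplified sketch; waitforserial_disconnect in the suite bounds its retries rather than looping forever:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp          # drags out nvme_fabrics/nvme_keyring too, per the rmmod lines above
  iptables-save | grep -v SPDK_NVMF | iptables-restore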
00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 659011' 00:18:57.507 killing process with pid 659011 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 659011 00:18:57.507 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 659011 00:18:57.766 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.767 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.678 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:59.678 00:18:59.678 real 0m15.398s 00:18:59.678 user 0m22.785s 00:18:59.678 sys 0m6.369s 00:18:59.678 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.678 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:59.678 ************************************ 00:18:59.678 END TEST nvmf_nvme_cli 00:18:59.678 ************************************ 00:18:59.678 22:47:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:59.678 22:47:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:59.938 22:47:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:59.938 22:47:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.938 22:47:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.938 ************************************ 00:18:59.938 START TEST nvmf_vfio_user 00:18:59.938 ************************************ 00:18:59.938 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:59.938 * Looking for test storage... 00:18:59.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.938 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:59.938 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.939 --rc genhtml_branch_coverage=1 00:18:59.939 --rc genhtml_function_coverage=1 00:18:59.939 --rc genhtml_legend=1 00:18:59.939 --rc geninfo_all_blocks=1 00:18:59.939 --rc geninfo_unexecuted_blocks=1 00:18:59.939 00:18:59.939 ' 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.939 --rc genhtml_branch_coverage=1 00:18:59.939 --rc genhtml_function_coverage=1 00:18:59.939 --rc genhtml_legend=1 00:18:59.939 --rc geninfo_all_blocks=1 00:18:59.939 --rc geninfo_unexecuted_blocks=1 00:18:59.939 00:18:59.939 ' 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.939 --rc genhtml_branch_coverage=1 00:18:59.939 --rc genhtml_function_coverage=1 00:18:59.939 --rc genhtml_legend=1 00:18:59.939 --rc geninfo_all_blocks=1 00:18:59.939 --rc geninfo_unexecuted_blocks=1 00:18:59.939 00:18:59.939 ' 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.939 --rc genhtml_branch_coverage=1 00:18:59.939 --rc genhtml_function_coverage=1 00:18:59.939 --rc genhtml_legend=1 00:18:59.939 --rc geninfo_all_blocks=1 00:18:59.939 --rc geninfo_unexecuted_blocks=1 00:18:59.939 00:18:59.939 ' 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.939 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:00.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
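The initiator identity that common.sh pins here is reused by every discover/connect in the suite: nvme gen-hostnqn mints a uuid-based NQN, and the host ID appears to be simply that NQN's uuid suffix (note the same 00539ede-... value in both flags of the earlier nvme calls). An illustrative sketch, with the suffix derivation an assumption:

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: strip through the last ':' to keep the uuid
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420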
00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=660804 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 660804' 00:19:00.200 Process pid: 660804 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 660804 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 660804 ']' 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.200 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.201 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.201 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.201 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:00.201 [2024-09-30 22:47:27.026033] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:19:00.201 [2024-09-30 22:47:27.026087] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.201 [2024-09-30 22:47:27.101871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.201 [2024-09-30 22:47:27.157637] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.201 [2024-09-30 22:47:27.157670] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
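Note the two core-mask spellings: the TCP target earlier was started with -m 0xF, this one with -m '[0,1,2,3]'; both pin the app to cores 0-3, which the four reactor start-up notices below confirm. Equivalent launches, assuming the same build tree:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF            # hex bitmask form
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'    # explicit core-list form, same four cores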
00:19:00.201 [2024-09-30 22:47:27.157676] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.201 [2024-09-30 22:47:27.157681] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.201 [2024-09-30 22:47:27.157686] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.201 [2024-09-30 22:47:27.157825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.201 [2024-09-30 22:47:27.157952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.201 [2024-09-30 22:47:27.158286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.201 [2024-09-30 22:47:27.158286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.142 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.142 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:01.142 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:02.085 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:02.085 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:02.085 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:02.085 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:02.085 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:02.085 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:02.345 Malloc1 00:19:02.345 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:02.607 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:02.607 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:02.867 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:02.867 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:02.867 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:03.129 Malloc2 00:19:03.129 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
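For VFIOUSER the listener address is not an IP but a directory holding the vfio-user socket, so each emulated controller gets its own path under /var/run/vfio-user. The per-device recipe driven above through rpc.py, shown for device 1 (device 2's remaining add_ns/add_listener calls follow just below):

  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0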
00:19:03.129 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:03.390 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:03.655 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:03.656 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:03.656 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:03.656 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:03.656 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:03.656 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:03.656 [2024-09-30 22:47:30.513125] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:19:03.656 [2024-09-30 22:47:30.513147] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661497 ] 00:19:03.656 [2024-09-30 22:47:30.539006] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:03.656 [2024-09-30 22:47:30.549173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:03.656 [2024-09-30 22:47:30.549191] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8b6323e000 00:19:03.656 [2024-09-30 22:47:30.550178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.656 [2024-09-30 22:47:30.551174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.656 [2024-09-30 22:47:30.552185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.656 [2024-09-30 22:47:30.553186] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:03.656 [2024-09-30 22:47:30.554197] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:03.656 [2024-09-30 22:47:30.555201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.656 [2024-09-30 22:47:30.556209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:19:03.656 [2024-09-30 22:47:30.557209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.656 [2024-09-30 22:47:30.558217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:03.656 [2024-09-30 22:47:30.558224] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8b63233000 00:19:03.656 [2024-09-30 22:47:30.559140] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:03.656 [2024-09-30 22:47:30.567588] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:03.656 [2024-09-30 22:47:30.567616] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:03.656 [2024-09-30 22:47:30.572302] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:03.656 [2024-09-30 22:47:30.572336] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:03.656 [2024-09-30 22:47:30.572398] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:03.656 [2024-09-30 22:47:30.572414] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:03.656 [2024-09-30 22:47:30.572418] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:19:03.656 [2024-09-30 22:47:30.573308] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:03.656 [2024-09-30 22:47:30.573318] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:03.656 [2024-09-30 22:47:30.573323] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:03.656 [2024-09-30 22:47:30.574316] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:03.656 [2024-09-30 22:47:30.574322] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:03.656 [2024-09-30 22:47:30.574327] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:03.656 [2024-09-30 22:47:30.575322] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:03.656 [2024-09-30 22:47:30.575328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:03.656 [2024-09-30 22:47:30.576324] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:03.656 [2024-09-30 
22:47:30.576331] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:03.656 [2024-09-30 22:47:30.576334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:03.656 [2024-09-30 22:47:30.576339] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:03.656 [2024-09-30 22:47:30.576443] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:03.656 [2024-09-30 22:47:30.576447] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:03.656 [2024-09-30 22:47:30.576450] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:03.656 [2024-09-30 22:47:30.577336] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:03.656 [2024-09-30 22:47:30.578342] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:03.656 [2024-09-30 22:47:30.579346] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:03.656 [2024-09-30 22:47:30.580343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:03.656 [2024-09-30 22:47:30.580413] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:03.656 [2024-09-30 22:47:30.581354] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:03.656 [2024-09-30 22:47:30.581359] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:03.656 [2024-09-30 22:47:30.581363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:03.656 [2024-09-30 22:47:30.581378] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:03.656 [2024-09-30 22:47:30.581383] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:03.656 [2024-09-30 22:47:30.581395] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:03.656 [2024-09-30 22:47:30.581401] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:03.656 [2024-09-30 22:47:30.581404] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.656 [2024-09-30 22:47:30.581415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:03.656 [2024-09-30 22:47:30.581456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:03.656 [2024-09-30 22:47:30.581463] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:03.656 [2024-09-30 22:47:30.581467] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:19:03.656 [2024-09-30 22:47:30.581470] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:03.656 [2024-09-30 22:47:30.581473] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:03.656 [2024-09-30 22:47:30.581477] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:03.656 [2024-09-30 22:47:30.581480] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:03.656 [2024-09-30 22:47:30.581483] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:19:03.656 [2024-09-30 22:47:30.581489] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:03.656 [2024-09-30 22:47:30.581497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:03.656 [2024-09-30 22:47:30.581508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:03.656 [2024-09-30 22:47:30.581517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.656 [2024-09-30 22:47:30.581523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.656 [2024-09-30 22:47:30.581529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.656 [2024-09-30 22:47:30.581535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.656 [2024-09-30 22:47:30.581538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:03.656 [2024-09-30 22:47:30.581544] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:03.656 [2024-09-30 22:47:30.581551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:03.656 [2024-09-30 22:47:30.581560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:03.656 [2024-09-30 22:47:30.581564] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:03.656 [2024-09-30 22:47:30.581567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581581] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581656] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:03.657 [2024-09-30 22:47:30.581659] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:03.657 [2024-09-30 22:47:30.581661] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.657 [2024-09-30 22:47:30.581665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581687] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:03.657 [2024-09-30 22:47:30.581693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581700] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581705] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:03.657 [2024-09-30 22:47:30.581708] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:03.657 [2024-09-30 22:47:30.581711] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.657 [2024-09-30 22:47:30.581715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581746] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581751] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:03.657 [2024-09-30 22:47:30.581754] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:03.657 [2024-09-30 22:47:30.581756] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.657 [2024-09-30 22:47:30.581760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581776] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581781] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581792] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581796] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581799] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581803] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:03.657 [2024-09-30 22:47:30.581806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:03.657 [2024-09-30 22:47:30.581810] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:03.657 [2024-09-30 22:47:30.581824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581902] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:03.657 [2024-09-30 22:47:30.581905] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:03.657 [2024-09-30 22:47:30.581908] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:03.657 [2024-09-30 22:47:30.581911] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:03.657 [2024-09-30 22:47:30.581913] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:03.657 [2024-09-30 22:47:30.581917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:03.657 [2024-09-30 22:47:30.581923] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:03.657 [2024-09-30 22:47:30.581926] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:03.657 [2024-09-30 22:47:30.581928] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.657 [2024-09-30 22:47:30.581932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581937] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:03.657 [2024-09-30 22:47:30.581940] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:03.657 [2024-09-30 22:47:30.581944] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.657 [2024-09-30 22:47:30.581948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581954] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:03.657 [2024-09-30 22:47:30.581957] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:03.657 [2024-09-30 22:47:30.581959] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.657 [2024-09-30 22:47:30.581963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:03.657 [2024-09-30 22:47:30.581968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:03.657 [2024-09-30 22:47:30.581990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:03.657 ===================================================== 00:19:03.657 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:03.657 ===================================================== 00:19:03.657 Controller Capabilities/Features 00:19:03.657 ================================ 00:19:03.657 Vendor ID: 4e58 00:19:03.657 Subsystem Vendor ID: 4e58 00:19:03.657 Serial Number: SPDK1 00:19:03.657 Model Number: SPDK bdev Controller 00:19:03.657 Firmware Version: 25.01 00:19:03.657 Recommended Arb Burst: 6 00:19:03.657 IEEE OUI Identifier: 8d 6b 50 00:19:03.657 Multi-path I/O 00:19:03.657 May have multiple subsystem ports: Yes 00:19:03.657 May have multiple controllers: Yes 00:19:03.657 Associated with SR-IOV VF: No 00:19:03.657 Max Data Transfer Size: 131072 00:19:03.657 Max Number of Namespaces: 32 00:19:03.657 Max Number of I/O Queues: 127 00:19:03.657 NVMe Specification Version (VS): 1.3 00:19:03.657 NVMe Specification Version (Identify): 1.3 00:19:03.657 Maximum Queue Entries: 256 00:19:03.657 Contiguous Queues Required: Yes 00:19:03.657 Arbitration Mechanisms Supported 00:19:03.657 Weighted Round Robin: Not Supported 00:19:03.657 Vendor Specific: Not Supported 00:19:03.657 Reset Timeout: 15000 ms 00:19:03.657 Doorbell Stride: 4 bytes 00:19:03.657 NVM Subsystem Reset: Not Supported 00:19:03.657 Command Sets Supported 00:19:03.657 NVM Command Set: Supported 00:19:03.657 Boot Partition: Not Supported 00:19:03.657 Memory Page Size Minimum: 4096 bytes 00:19:03.657 Memory Page Size Maximum: 4096 bytes 00:19:03.657 Persistent Memory Region: Not Supported 00:19:03.657 Optional Asynchronous Events Supported 00:19:03.657 Namespace Attribute Notices: Supported 00:19:03.657 Firmware Activation Notices: Not Supported 00:19:03.657 ANA Change Notices: Not Supported 00:19:03.657 PLE Aggregate Log Change Notices: Not Supported 00:19:03.658 LBA Status Info Alert Notices: Not Supported 00:19:03.658 EGE Aggregate Log Change Notices: Not Supported 00:19:03.658 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.658 Zone Descriptor Change Notices: Not Supported 00:19:03.658 Discovery Log Change Notices: Not Supported 00:19:03.658 Controller Attributes 00:19:03.658 128-bit Host Identifier: Supported 00:19:03.658 Non-Operational Permissive Mode: Not Supported 00:19:03.658 NVM Sets: Not Supported 00:19:03.658 Read Recovery Levels: Not Supported 00:19:03.658 Endurance Groups: Not Supported 00:19:03.658 Predictable Latency Mode: Not Supported 00:19:03.658 Traffic Based Keep Alive: Not Supported 00:19:03.658 Namespace Granularity: Not Supported 00:19:03.658 SQ Associations: Not Supported 00:19:03.658 UUID List: Not Supported 00:19:03.658 Multi-Domain Subsystem: Not Supported 00:19:03.658 Fixed Capacity Management: Not Supported 00:19:03.658 Variable Capacity Management: Not Supported 00:19:03.658 Delete Endurance Group: Not Supported 00:19:03.658 Delete NVM Set: Not Supported 00:19:03.658 Extended LBA Formats Supported: Not Supported 00:19:03.658 Flexible Data Placement Supported: Not Supported 00:19:03.658 00:19:03.658 Controller Memory Buffer Support 00:19:03.658 ================================ 00:19:03.658 Supported: No 00:19:03.658 00:19:03.658 Persistent Memory Region Support 00:19:03.658
================================ 00:19:03.658 Supported: No 00:19:03.658 00:19:03.658 Admin Command Set Attributes 00:19:03.658 ============================ 00:19:03.658 Security Send/Receive: Not Supported 00:19:03.658 Format NVM: Not Supported 00:19:03.658 Firmware Activate/Download: Not Supported 00:19:03.658 Namespace Management: Not Supported 00:19:03.658 Device Self-Test: Not Supported 00:19:03.658 Directives: Not Supported 00:19:03.658 NVMe-MI: Not Supported 00:19:03.658 Virtualization Management: Not Supported 00:19:03.658 Doorbell Buffer Config: Not Supported 00:19:03.658 Get LBA Status Capability: Not Supported 00:19:03.658 Command & Feature Lockdown Capability: Not Supported 00:19:03.658 Abort Command Limit: 4 00:19:03.658 Async Event Request Limit: 4 00:19:03.658 Number of Firmware Slots: N/A 00:19:03.658 Firmware Slot 1 Read-Only: N/A 00:19:03.658 Firmware Activation Without Reset: N/A 00:19:03.658 Multiple Update Detection Support: N/A 00:19:03.658 Firmware Update Granularity: No Information Provided 00:19:03.658 Per-Namespace SMART Log: No 00:19:03.658 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.658 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:03.658 Command Effects Log Page: Supported 00:19:03.658 Get Log Page Extended Data: Supported 00:19:03.658 Telemetry Log Pages: Not Supported 00:19:03.658 Persistent Event Log Pages: Not Supported 00:19:03.658 Supported Log Pages Log Page: May Support 00:19:03.658 Commands Supported & Effects Log Page: Not Supported 00:19:03.658 Feature Identifiers & Effects Log Page: May Support 00:19:03.658 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.658 Data Area 4 for Telemetry Log: Not Supported 00:19:03.658 Error Log Page Entries Supported: 128 00:19:03.658 Keep Alive: Supported 00:19:03.658 Keep Alive Granularity: 10000 ms 00:19:03.658 00:19:03.658 NVM Command Set Attributes 00:19:03.658 ========================== 00:19:03.658 Submission Queue Entry Size 00:19:03.658 Max: 64 00:19:03.658 Min: 64 00:19:03.658 Completion Queue Entry Size 00:19:03.658 Max: 16 00:19:03.658 Min: 16 00:19:03.658 Number of Namespaces: 32 00:19:03.658 Compare Command: Supported 00:19:03.658 Write Uncorrectable Command: Not Supported 00:19:03.658 Dataset Management Command: Supported 00:19:03.658 Write Zeroes Command: Supported 00:19:03.658 Set Features Save Field: Not Supported 00:19:03.658 Reservations: Not Supported 00:19:03.658 Timestamp: Not Supported 00:19:03.658 Copy: Supported 00:19:03.658 Volatile Write Cache: Present 00:19:03.658 Atomic Write Unit (Normal): 1 00:19:03.658 Atomic Write Unit (PFail): 1 00:19:03.658 Atomic Compare & Write Unit: 1 00:19:03.658 Fused Compare & Write: Supported 00:19:03.658 Scatter-Gather List 00:19:03.658 SGL Command Set: Supported (Dword aligned) 00:19:03.658 SGL Keyed: Not Supported 00:19:03.658 SGL Bit Bucket Descriptor: Not Supported 00:19:03.658 SGL Metadata Pointer: Not Supported 00:19:03.658 Oversized SGL: Not Supported 00:19:03.658 SGL Metadata Address: Not Supported 00:19:03.658 SGL Offset: Not Supported 00:19:03.658 Transport SGL Data Block: Not Supported 00:19:03.658 Replay Protected Memory Block: Not Supported 00:19:03.658 00:19:03.658 Firmware Slot Information 00:19:03.658 ========================= 00:19:03.658 Active slot: 1 00:19:03.658 Slot 1 Firmware Revision: 25.01 00:19:03.658 00:19:03.658 00:19:03.658 Commands Supported and Effects 00:19:03.658 ============================== 00:19:03.658 Admin Commands 00:19:03.658 -------------- 00:19:03.658 Get Log Page (02h): Supported
00:19:03.658 Identify (06h): Supported 00:19:03.658 Abort (08h): Supported 00:19:03.658 Set Features (09h): Supported 00:19:03.658 Get Features (0Ah): Supported 00:19:03.658 Asynchronous Event Request (0Ch): Supported 00:19:03.658 Keep Alive (18h): Supported 00:19:03.658 I/O Commands 00:19:03.658 ------------ 00:19:03.658 Flush (00h): Supported LBA-Change 00:19:03.658 Write (01h): Supported LBA-Change 00:19:03.658 Read (02h): Supported 00:19:03.658 Compare (05h): Supported 00:19:03.658 Write Zeroes (08h): Supported LBA-Change 00:19:03.658 Dataset Management (09h): Supported LBA-Change 00:19:03.658 Copy (19h): Supported LBA-Change 00:19:03.658 00:19:03.658 Error Log 00:19:03.658 ========= 00:19:03.658 00:19:03.658 Arbitration 00:19:03.658 =========== 00:19:03.658 Arbitration Burst: 1 00:19:03.658 00:19:03.658 Power Management 00:19:03.658 ================ 00:19:03.658 Number of Power States: 1 00:19:03.658 Current Power State: Power State #0 00:19:03.658 Power State #0: 00:19:03.658 Max Power: 0.00 W 00:19:03.658 Non-Operational State: Operational 00:19:03.658 Entry Latency: Not Reported 00:19:03.658 Exit Latency: Not Reported 00:19:03.658 Relative Read Throughput: 0 00:19:03.658 Relative Read Latency: 0 00:19:03.658 Relative Write Throughput: 0 00:19:03.658 Relative Write Latency: 0 00:19:03.658 Idle Power: Not Reported 00:19:03.658 Active Power: Not Reported 00:19:03.658 Non-Operational Permissive Mode: Not Supported 00:19:03.658 00:19:03.658 Health Information 00:19:03.658 ================== 00:19:03.658 Critical Warnings: 00:19:03.658 Available Spare Space: OK 00:19:03.658 Temperature: OK 00:19:03.658 Device Reliability: OK 00:19:03.658 Read Only: No 00:19:03.658 Volatile Memory Backup: OK 00:19:03.658 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:03.658 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:03.658 Available Spare: 0% 00:19:03.658 Available Spare Threshold: 0% 00:19:03.658 Life Percentage Used: 0% 00:19:03.658 Data Units Read: 0 00:19:03.658 Data Units Written: 0 00:19:03.658 Host Read Commands: 0 00:19:03.658 Host Write Commands: 0 00:19:03.658 Controller Busy Time: 0 minutes 00:19:03.658 Power Cycles: 0 00:19:03.658 Power On Hours: 0 hours 00:19:03.659 Unsafe Shutdowns: 0 00:19:03.659 Unrecoverable Media Errors: 0 00:19:03.659 Lifetime Error Log Entries: 0 00:19:03.659 Warning Temperature Time: 0 minutes 00:19:03.659 Critical Temperature Time: 0 minutes 00:19:03.659 00:19:03.659 Number of Queues 00:19:03.659 ================ 00:19:03.659 Number of I/O Submission Queues: 127 00:19:03.659 Number of I/O Completion Queues: 127 00:19:03.659 00:19:03.659 Active Namespaces 00:19:03.659 ================= 00:19:03.659 Namespace ID:1 00:19:03.659 Error Recovery Timeout: Unlimited 00:19:03.659 Command Set Identifier: NVM (00h) 00:19:03.659 Deallocate: Supported 00:19:03.659 Deallocated/Unwritten Error: Not Supported 00:19:03.659 Deallocated Read Value: Unknown 00:19:03.659 Deallocate in Write Zeroes: Not Supported 00:19:03.659 Deallocated Guard Field: 0xFFFF 00:19:03.659 Flush: Supported 00:19:03.659 Reservation: Supported 00:19:03.659 Namespace Sharing Capabilities: Multiple Controllers 00:19:03.659 Size (in LBAs): 131072 (0GiB) 00:19:03.659 Capacity (in LBAs): 131072 (0GiB) 00:19:03.659 Utilization (in LBAs): 131072 (0GiB) 00:19:03.659 NGUID: CC07D41D6D2B42F7B40C5866AAFC4B73 00:19:03.659 UUID: cc07d41d-6d2b-42f7-b40c-5866aafc4b73 00:19:03.659 Thin Provisioning: Not Supported 00:19:03.659 Per-NS Atomic Units: Yes 00:19:03.659 Atomic Boundary Size (Normal): 0 00:19:03.659 Atomic Boundary Size (PFail): 0 00:19:03.659 Atomic Boundary Offset: 0 00:19:03.659 Maximum Single Source Range Length: 65535 00:19:03.659 Maximum Copy Length: 65535 00:19:03.659 Maximum Source Range Count: 1 00:19:03.659 NGUID/EUI64 Never Reused: No 00:19:03.659 Namespace Write Protected: No 00:19:03.659 Number of LBA Formats: 1 00:19:03.659 Current LBA Format: LBA Format #00 00:19:03.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.659 00:19:03.659
[2024-09-30 22:47:30.582061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:03.658 [2024-09-30 22:47:30.582072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:03.658 [2024-09-30 22:47:30.582093] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:19:03.658 [2024-09-30 22:47:30.582100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.658 [2024-09-30 22:47:30.582104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.658 [2024-09-30 22:47:30.582109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.658 [2024-09-30 22:47:30.582113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.658 [2024-09-30 22:47:30.585900] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:03.658 [2024-09-30 22:47:30.585909] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:03.658 [2024-09-30 22:47:30.586376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:03.658 [2024-09-30 22:47:30.586425] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:19:03.658 [2024-09-30 22:47:30.586430] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:19:03.658 [2024-09-30 22:47:30.587388] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:03.658 [2024-09-30 22:47:30.587396] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:19:03.658 [2024-09-30 22:47:30.587456] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:03.658 [2024-09-30 22:47:30.588408] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:03.658
22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:03.919 [2024-09-30 22:47:30.757512] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:09.205 Initializing NVMe Controllers 00:19:09.205 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:09.205 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:09.205 Initialization complete. Launching workers. 00:19:09.205 ======================================================== 00:19:09.205 Latency(us) 00:19:09.205 Device Information : IOPS MiB/s Average min max 00:19:09.205 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39974.81 156.15 3202.24 847.22 9610.71 00:19:09.205 ======================================================== 00:19:09.205 Total : 39974.81 156.15 3202.24 847.22 9610.71 00:19:09.205 00:19:09.205 [2024-09-30 22:47:35.775078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:09.205 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:09.205 [2024-09-30 22:47:35.958938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:14.490 Initializing NVMe Controllers 00:19:14.490 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:14.490 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:14.490 Initialization complete. Launching workers. 00:19:14.490 ======================================================== 00:19:14.490 Latency(us) 00:19:14.490 Device Information : IOPS MiB/s Average min max 00:19:14.490 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7985.52 6978.25 10975.31 00:19:14.490 ======================================================== 00:19:14.490 Total : 16051.20 62.70 7985.52 6978.25 10975.31 00:19:14.490 00:19:14.490 [2024-09-30 22:47:40.997005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:14.490 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:14.490 [2024-09-30 22:47:41.187834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:19.776 [2024-09-30 22:47:46.284201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:19.776 Initializing NVMe Controllers 00:19:19.776 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:19.776 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:19.776 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:19.776 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:19.776 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:19.776 Initialization complete. Launching workers. 
00:19:19.776 Starting thread on core 2 00:19:19.776 Starting thread on core 3 00:19:19.776 Starting thread on core 1 00:19:19.776 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:19.776 [2024-09-30 22:47:46.523302] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:23.075 [2024-09-30 22:47:49.580357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:23.075 Initializing NVMe Controllers 00:19:23.075 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:23.075 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:23.075 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:23.075 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:23.075 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:23.075 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:23.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:23.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:23.075 Initialization complete. Launching workers. 00:19:23.075 Starting thread on core 1 with urgent priority queue 00:19:23.075 Starting thread on core 2 with urgent priority queue 00:19:23.075 Starting thread on core 3 with urgent priority queue 00:19:23.075 Starting thread on core 0 with urgent priority queue 00:19:23.075 SPDK bdev Controller (SPDK1 ) core 0: 11328.67 IO/s 8.83 secs/100000 ios 00:19:23.075 SPDK bdev Controller (SPDK1 ) core 1: 11240.33 IO/s 8.90 secs/100000 ios 00:19:23.075 SPDK bdev Controller (SPDK1 ) core 2: 8186.67 IO/s 12.21 secs/100000 ios 00:19:23.075 SPDK bdev Controller (SPDK1 ) core 3: 12557.00 IO/s 7.96 secs/100000 ios 00:19:23.075 ======================================================== 00:19:23.075 00:19:23.075 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:23.075 [2024-09-30 22:47:49.805253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:23.075 Initializing NVMe Controllers 00:19:23.075 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:23.075 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:23.075 Namespace ID: 1 size: 0GB 00:19:23.075 Initialization complete. 00:19:23.075 INFO: using host memory buffer for IO 00:19:23.075 Hello world! 
00:19:23.075 [2024-09-30 22:47:49.842465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:23.075 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:23.075 [2024-09-30 22:47:50.061580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:24.459 Initializing NVMe Controllers 00:19:24.459 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:24.459 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:24.459 Initialization complete. Launching workers. 00:19:24.459 submit (in ns) avg, min, max = 5315.9, 2822.5, 3999122.5 00:19:24.459 complete (in ns) avg, min, max = 16491.1, 1626.7, 4005354.2 00:19:24.459 00:19:24.459 Submit histogram 00:19:24.459 ================ 00:19:24.459 Range in us Cumulative Count 00:19:24.459 2.813 - 2.827: 0.0344% ( 7) 00:19:24.459 2.827 - 2.840: 0.6681% ( 129) 00:19:24.459 2.840 - 2.853: 2.3629% ( 345) 00:19:24.459 2.853 - 2.867: 5.7919% ( 698) 00:19:24.459 2.867 - 2.880: 10.8027% ( 1020) 00:19:24.459 2.880 - 2.893: 16.1083% ( 1080) 00:19:24.459 2.893 - 2.907: 22.0426% ( 1208) 00:19:24.459 2.907 - 2.920: 28.3111% ( 1276) 00:19:24.459 2.920 - 2.933: 35.1297% ( 1388) 00:19:24.459 2.933 - 2.947: 41.1820% ( 1232) 00:19:24.459 2.947 - 2.960: 46.7528% ( 1134) 00:19:24.459 2.960 - 2.973: 54.5146% ( 1580) 00:19:24.459 2.973 - 2.987: 64.5313% ( 2039) 00:19:24.459 2.987 - 3.000: 74.9214% ( 2115) 00:19:24.459 3.000 - 3.013: 82.8355% ( 1611) 00:19:24.459 3.013 - 3.027: 88.9124% ( 1237) 00:19:24.459 3.027 - 3.040: 93.6481% ( 964) 00:19:24.459 3.040 - 3.053: 96.1780% ( 515) 00:19:24.459 3.053 - 3.067: 97.5634% ( 282) 00:19:24.459 3.067 - 3.080: 98.6736% ( 226) 00:19:24.459 3.080 - 3.093: 99.2091% ( 109) 00:19:24.459 3.093 - 3.107: 99.4400% ( 47) 00:19:24.459 3.107 - 3.120: 99.5235% ( 17) 00:19:24.459 3.120 - 3.133: 99.5333% ( 2) 00:19:24.459 3.147 - 3.160: 99.5382% ( 1) 00:19:24.459 3.240 - 3.253: 99.5431% ( 1) 00:19:24.459 3.320 - 3.333: 99.5530% ( 2) 00:19:24.459 3.347 - 3.360: 99.5579% ( 1) 00:19:24.459 3.387 - 3.400: 99.5628% ( 1) 00:19:24.459 3.413 - 3.440: 99.5677% ( 1) 00:19:24.459 3.547 - 3.573: 99.5873% ( 4) 00:19:24.459 3.573 - 3.600: 99.5923% ( 1) 00:19:24.459 3.600 - 3.627: 99.5972% ( 1) 00:19:24.459 3.653 - 3.680: 99.6021% ( 1) 00:19:24.459 3.760 - 3.787: 99.6070% ( 1) 00:19:24.459 3.867 - 3.893: 99.6168% ( 2) 00:19:24.459 4.160 - 4.187: 99.6266% ( 2) 00:19:24.459 4.400 - 4.427: 99.6316% ( 1) 00:19:24.459 4.427 - 4.453: 99.6365% ( 1) 00:19:24.459 4.453 - 4.480: 99.6414% ( 1) 00:19:24.459 4.533 - 4.560: 99.6463% ( 1) 00:19:24.459 4.667 - 4.693: 99.6512% ( 1) 00:19:24.459 4.720 - 4.747: 99.6561% ( 1) 00:19:24.459 4.827 - 4.853: 99.6610% ( 1) 00:19:24.459 4.933 - 4.960: 99.6709% ( 2) 00:19:24.459 4.960 - 4.987: 99.6758% ( 1) 00:19:24.459 5.013 - 5.040: 99.6807% ( 1) 00:19:24.459 5.093 - 5.120: 99.6856% ( 1) 00:19:24.459 5.120 - 5.147: 99.6954% ( 2) 00:19:24.459 5.147 - 5.173: 99.7052% ( 2) 00:19:24.459 5.173 - 5.200: 99.7102% ( 1) 00:19:24.459 5.360 - 5.387: 99.7151% ( 1) 00:19:24.459 5.493 - 5.520: 99.7200% ( 1) 00:19:24.459 5.600 - 5.627: 99.7249% ( 1) 00:19:24.459 5.920 - 5.947: 99.7298% ( 1) 00:19:24.459 6.027 - 6.053: 99.7347% ( 1) 00:19:24.459 6.133 - 6.160: 99.7445% ( 2) 00:19:24.459 6.213 - 6.240: 
99.7495% ( 1) 00:19:24.459 6.240 - 6.267: 99.7544% ( 1) 00:19:24.459 6.267 - 6.293: 99.7593% ( 1) 00:19:24.459 6.293 - 6.320: 99.7642% ( 1) 00:19:24.459 6.320 - 6.347: 99.7740% ( 2) 00:19:24.459 6.347 - 6.373: 99.7838% ( 2) 00:19:24.459 6.400 - 6.427: 99.7888% ( 1) 00:19:24.459 6.453 - 6.480: 99.7937% ( 1) 00:19:24.459 6.533 - 6.560: 99.7986% ( 1) 00:19:24.459 6.560 - 6.587: 99.8035% ( 1) 00:19:24.459 6.587 - 6.613: 99.8133% ( 2) 00:19:24.459 6.613 - 6.640: 99.8231% ( 2) 00:19:24.459 6.640 - 6.667: 99.8281% ( 1) 00:19:24.459 6.800 - 6.827: 99.8330% ( 1) 00:19:24.459 6.880 - 6.933: 99.8526% ( 4) 00:19:24.459 6.933 - 6.987: 99.8575% ( 1) 00:19:24.459 6.987 - 7.040: 99.8674% ( 2) 00:19:24.459 7.253 - 7.307: 99.8723% ( 1) 00:19:24.459 7.307 - 7.360: 99.8772% ( 1) 00:19:24.459 7.360 - 7.413: 99.8919% ( 3) 00:19:24.459 7.413 - 7.467: 99.9017% ( 2) 00:19:24.459 7.627 - 7.680: 99.9116% ( 2) 00:19:24.459 [2024-09-30 22:47:51.081042] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:24.459 7.680 - 7.733: 99.9214% ( 2) 00:19:24.459 7.787 - 7.840: 99.9263% ( 1) 00:19:24.459 8.053 - 8.107: 99.9361% ( 2) 00:19:24.459 8.320 - 8.373: 99.9410% ( 1) 00:19:24.459 3822.933 - 3850.240: 99.9460% ( 1) 00:19:24.459 3986.773 - 4014.080: 100.0000% ( 11) 00:19:24.459 00:19:24.459 Complete histogram 00:19:24.459 ================== 00:19:24.459 Range in us Cumulative Count 00:19:24.459 1.627 - 1.633: 0.0049% ( 1) 00:19:24.459 1.633 - 1.640: 0.4176% ( 84) 00:19:24.459 1.640 - 1.647: 1.1790% ( 155) 00:19:24.459 1.647 - 1.653: 1.2232% ( 9) 00:19:24.459 1.653 - 1.660: 1.3067% ( 17) 00:19:24.459 1.660 - 1.667: 1.3853% ( 16) 00:19:24.459 1.667 - 1.673: 1.4246% ( 8) 00:19:24.459 1.673 - 1.680: 1.7636% ( 69) 00:19:24.459 1.680 - 1.687: 26.8619% ( 5109) 00:19:24.459 1.687 - 1.693: 52.9426% ( 5309) 00:19:24.459 1.693 - 1.700: 61.7803% ( 1799) 00:19:24.459 1.700 - 1.707: 73.3346% ( 2352) 00:19:24.459 1.707 - 1.720: 81.4698% ( 1656) 00:19:24.459 1.720 - 1.733: 83.5724% ( 428) 00:19:24.459 1.733 - 1.747: 84.9676% ( 284) 00:19:24.459 1.747 - 1.760: 90.2830% ( 1082) 00:19:24.459 1.760 - 1.773: 95.8194% ( 1127) 00:19:24.459 1.773 - 1.787: 98.4132% ( 528) 00:19:24.459 1.787 - 1.800: 99.2189% ( 164) 00:19:24.459 1.800 - 1.813: 99.3368% ( 24) 00:19:24.459 1.813 - 1.827: 99.3712% ( 7) 00:19:24.459 1.853 - 1.867: 99.3761% ( 1) 00:19:24.459 1.920 - 1.933: 99.3810% ( 1) 00:19:24.459 2.027 - 2.040: 99.3859% ( 1) 00:19:24.459 2.093 - 2.107: 99.3908% ( 1) 00:19:24.459 2.107 - 2.120: 99.3958% ( 1) 00:19:24.459 2.173 - 2.187: 99.4007% ( 1) 00:19:24.459 2.267 - 2.280: 99.4056% ( 1) 00:19:24.459 4.160 - 4.187: 99.4105% ( 1) 00:19:24.459 4.213 - 4.240: 99.4154% ( 1) 00:19:24.459 4.240 - 4.267: 99.4203% ( 1) 00:19:24.459 4.267 - 4.293: 99.4252% ( 1) 00:19:24.459 4.533 - 4.560: 99.4301% ( 1) 00:19:24.459 4.693 - 4.720: 99.4400% ( 2) 00:19:24.459 4.720 - 4.747: 99.4449% ( 1) 00:19:24.459 4.747 - 4.773: 99.4498% ( 1) 00:19:24.459 4.773 - 4.800: 99.4547% ( 1) 00:19:24.459 4.800 - 4.827: 99.4596% ( 1) 00:19:24.459 4.933 - 4.960: 99.4694% ( 2) 00:19:24.459 4.960 - 4.987: 99.4793% ( 2) 00:19:24.459 5.067 - 5.093: 99.4842% ( 1) 00:19:24.459 5.093 - 5.120: 99.4891% ( 1) 00:19:24.459 5.147 - 5.173: 99.4989% ( 2) 00:19:24.459 5.200 - 5.227: 99.5087% ( 2) 00:19:24.459 5.227 - 5.253: 99.5137% ( 1) 00:19:24.459 5.280 - 5.307: 99.5186% ( 1) 00:19:24.460 5.307 - 5.333: 99.5235% ( 1) 00:19:24.460 5.520 - 5.547: 99.5333% ( 2) 00:19:24.460 5.547 - 5.573: 99.5382% ( 1) 00:19:24.460 5.573 - 5.600: 99.5431% ( 1) 
00:19:24.460 5.680 - 5.707: 99.5480% ( 1) 00:19:24.460 5.707 - 5.733: 99.5530% ( 1) 00:19:24.460 5.760 - 5.787: 99.5579% ( 1) 00:19:24.460 5.893 - 5.920: 99.5628% ( 1) 00:19:24.460 6.000 - 6.027: 99.5677% ( 1) 00:19:24.460 6.053 - 6.080: 99.5726% ( 1) 00:19:24.460 6.107 - 6.133: 99.5873% ( 3) 00:19:24.460 6.240 - 6.267: 99.5923% ( 1) 00:19:24.460 6.293 - 6.320: 99.5972% ( 1) 00:19:24.460 6.427 - 6.453: 99.6021% ( 1) 00:19:24.460 6.480 - 6.507: 99.6070% ( 1) 00:19:24.460 6.587 - 6.613: 99.6119% ( 1) 00:19:24.460 6.827 - 6.880: 99.6168% ( 1) 00:19:24.460 13.440 - 13.493: 99.6217% ( 1) 00:19:24.460 14.720 - 14.827: 99.6266% ( 1) 00:19:24.460 1153.707 - 1160.533: 99.6316% ( 1) 00:19:24.460 3986.773 - 4014.080: 100.0000% ( 75) 00:19:24.460 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:24.460 [ 00:19:24.460 { 00:19:24.460 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:24.460 "subtype": "Discovery", 00:19:24.460 "listen_addresses": [], 00:19:24.460 "allow_any_host": true, 00:19:24.460 "hosts": [] 00:19:24.460 }, 00:19:24.460 { 00:19:24.460 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:24.460 "subtype": "NVMe", 00:19:24.460 "listen_addresses": [ 00:19:24.460 { 00:19:24.460 "trtype": "VFIOUSER", 00:19:24.460 "adrfam": "IPv4", 00:19:24.460 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:24.460 "trsvcid": "0" 00:19:24.460 } 00:19:24.460 ], 00:19:24.460 "allow_any_host": true, 00:19:24.460 "hosts": [], 00:19:24.460 "serial_number": "SPDK1", 00:19:24.460 "model_number": "SPDK bdev Controller", 00:19:24.460 "max_namespaces": 32, 00:19:24.460 "min_cntlid": 1, 00:19:24.460 "max_cntlid": 65519, 00:19:24.460 "namespaces": [ 00:19:24.460 { 00:19:24.460 "nsid": 1, 00:19:24.460 "bdev_name": "Malloc1", 00:19:24.460 "name": "Malloc1", 00:19:24.460 "nguid": "CC07D41D6D2B42F7B40C5866AAFC4B73", 00:19:24.460 "uuid": "cc07d41d-6d2b-42f7-b40c-5866aafc4b73" 00:19:24.460 } 00:19:24.460 ] 00:19:24.460 }, 00:19:24.460 { 00:19:24.460 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:24.460 "subtype": "NVMe", 00:19:24.460 "listen_addresses": [ 00:19:24.460 { 00:19:24.460 "trtype": "VFIOUSER", 00:19:24.460 "adrfam": "IPv4", 00:19:24.460 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:24.460 "trsvcid": "0" 00:19:24.460 } 00:19:24.460 ], 00:19:24.460 "allow_any_host": true, 00:19:24.460 "hosts": [], 00:19:24.460 "serial_number": "SPDK2", 00:19:24.460 "model_number": "SPDK bdev Controller", 00:19:24.460 "max_namespaces": 32, 00:19:24.460 "min_cntlid": 1, 00:19:24.460 "max_cntlid": 65519, 00:19:24.460 "namespaces": [ 00:19:24.460 { 00:19:24.460 "nsid": 1, 00:19:24.460 "bdev_name": "Malloc2", 00:19:24.460 "name": "Malloc2", 00:19:24.460 "nguid": "98F834F4373E49ED84459EB1619A399F", 00:19:24.460 "uuid": "98f834f4-373e-49ed-8445-9eb1619a399f" 00:19:24.460 } 00:19:24.460 ] 00:19:24.460 } 
00:19:24.460 ] 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=665522 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:24.460 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:24.460 [2024-09-30 22:47:51.438274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:24.720 Malloc3 00:19:24.720 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:24.720 [2024-09-30 22:47:51.650803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:24.720 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:24.720 Asynchronous Event Request test 00:19:24.720 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:24.720 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:24.720 Registering asynchronous event callbacks... 00:19:24.720 Starting namespace attribute notice tests for all controllers... 00:19:24.720 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:24.720 aer_cb - Changed Namespace 00:19:24.720 Cleaning up... 
00:19:24.982 [ 00:19:24.982 { 00:19:24.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:24.982 "subtype": "Discovery", 00:19:24.982 "listen_addresses": [], 00:19:24.982 "allow_any_host": true, 00:19:24.982 "hosts": [] 00:19:24.982 }, 00:19:24.982 { 00:19:24.982 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:24.982 "subtype": "NVMe", 00:19:24.982 "listen_addresses": [ 00:19:24.982 { 00:19:24.982 "trtype": "VFIOUSER", 00:19:24.982 "adrfam": "IPv4", 00:19:24.982 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:24.982 "trsvcid": "0" 00:19:24.982 } 00:19:24.982 ], 00:19:24.982 "allow_any_host": true, 00:19:24.982 "hosts": [], 00:19:24.982 "serial_number": "SPDK1", 00:19:24.982 "model_number": "SPDK bdev Controller", 00:19:24.982 "max_namespaces": 32, 00:19:24.982 "min_cntlid": 1, 00:19:24.982 "max_cntlid": 65519, 00:19:24.982 "namespaces": [ 00:19:24.982 { 00:19:24.982 "nsid": 1, 00:19:24.982 "bdev_name": "Malloc1", 00:19:24.982 "name": "Malloc1", 00:19:24.982 "nguid": "CC07D41D6D2B42F7B40C5866AAFC4B73", 00:19:24.982 "uuid": "cc07d41d-6d2b-42f7-b40c-5866aafc4b73" 00:19:24.982 }, 00:19:24.982 { 00:19:24.982 "nsid": 2, 00:19:24.982 "bdev_name": "Malloc3", 00:19:24.982 "name": "Malloc3", 00:19:24.982 "nguid": "1A46EF6B28284149A26AF988BE684A93", 00:19:24.982 "uuid": "1a46ef6b-2828-4149-a26a-f988be684a93" 00:19:24.982 } 00:19:24.982 ] 00:19:24.982 }, 00:19:24.982 { 00:19:24.982 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:24.982 "subtype": "NVMe", 00:19:24.982 "listen_addresses": [ 00:19:24.982 { 00:19:24.982 "trtype": "VFIOUSER", 00:19:24.982 "adrfam": "IPv4", 00:19:24.982 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:24.982 "trsvcid": "0" 00:19:24.982 } 00:19:24.982 ], 00:19:24.982 "allow_any_host": true, 00:19:24.982 "hosts": [], 00:19:24.982 "serial_number": "SPDK2", 00:19:24.982 "model_number": "SPDK bdev Controller", 00:19:24.982 "max_namespaces": 32, 00:19:24.982 "min_cntlid": 1, 00:19:24.982 "max_cntlid": 65519, 00:19:24.982 "namespaces": [ 00:19:24.982 { 00:19:24.982 "nsid": 1, 00:19:24.982 "bdev_name": "Malloc2", 00:19:24.982 "name": "Malloc2", 00:19:24.982 "nguid": "98F834F4373E49ED84459EB1619A399F", 00:19:24.982 "uuid": "98f834f4-373e-49ed-8445-9eb1619a399f" 00:19:24.982 } 00:19:24.982 ] 00:19:24.982 } 00:19:24.982 ] 00:19:24.982 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 665522 00:19:24.982 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:24.982 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:24.982 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:24.982 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:24.982 [2024-09-30 22:47:51.887922] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:19:24.982 [2024-09-30 22:47:51.887964] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665535 ] 00:19:24.982 [2024-09-30 22:47:51.913981] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:24.982 [2024-09-30 22:47:51.919308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:24.982 [2024-09-30 22:47:51.919327] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2baaa51000 00:19:24.982 [2024-09-30 22:47:51.920309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:24.982 [2024-09-30 22:47:51.921311] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:24.982 [2024-09-30 22:47:51.922315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:24.982 [2024-09-30 22:47:51.923325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:24.982 [2024-09-30 22:47:51.924328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:24.982 [2024-09-30 22:47:51.925334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:24.982 [2024-09-30 22:47:51.926341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:24.982 [2024-09-30 22:47:51.927345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:24.982 [2024-09-30 22:47:51.928352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:24.982 [2024-09-30 22:47:51.928360] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2baaa46000 00:19:24.982 [2024-09-30 22:47:51.929272] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:24.982 [2024-09-30 22:47:51.938652] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:24.982 [2024-09-30 22:47:51.938675] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:24.982 [2024-09-30 22:47:51.943751] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:24.982 [2024-09-30 22:47:51.943783] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:24.982 [2024-09-30 22:47:51.943843] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:24.982 [2024-09-30 
22:47:51.943855] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:24.982 [2024-09-30 22:47:51.943859] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:24.982 [2024-09-30 22:47:51.944751] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:24.982 [2024-09-30 22:47:51.944759] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:24.982 [2024-09-30 22:47:51.944764] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:24.982 [2024-09-30 22:47:51.945761] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:24.982 [2024-09-30 22:47:51.945768] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:24.982 [2024-09-30 22:47:51.945774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:24.982 [2024-09-30 22:47:51.946763] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:24.982 [2024-09-30 22:47:51.946770] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:24.982 [2024-09-30 22:47:51.947768] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:24.982 [2024-09-30 22:47:51.947775] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:24.982 [2024-09-30 22:47:51.947778] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:24.982 [2024-09-30 22:47:51.947783] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:24.982 [2024-09-30 22:47:51.947888] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:24.982 [2024-09-30 22:47:51.947891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:24.982 [2024-09-30 22:47:51.947899] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:24.982 [2024-09-30 22:47:51.948774] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:24.982 [2024-09-30 22:47:51.949782] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:24.982 [2024-09-30 22:47:51.950787] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:19:24.982 [2024-09-30 22:47:51.951790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:24.982 [2024-09-30 22:47:51.951822] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:24.982 [2024-09-30 22:47:51.952803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:24.982 [2024-09-30 22:47:51.952809] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:24.982 [2024-09-30 22:47:51.952813] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:24.982 [2024-09-30 22:47:51.952828] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:24.982 [2024-09-30 22:47:51.952833] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:24.982 [2024-09-30 22:47:51.952843] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:24.982 [2024-09-30 22:47:51.952846] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:24.983 [2024-09-30 22:47:51.952849] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:24.983 [2024-09-30 22:47:51.952858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:24.983 [2024-09-30 22:47:51.956902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:24.983 [2024-09-30 22:47:51.956911] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:24.983 [2024-09-30 22:47:51.956917] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:24.983 [2024-09-30 22:47:51.956920] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:24.983 [2024-09-30 22:47:51.956924] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:24.983 [2024-09-30 22:47:51.956927] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:24.983 [2024-09-30 22:47:51.956930] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:24.983 [2024-09-30 22:47:51.956934] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.956940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.956947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:24.983 [2024-09-30 22:47:51.964901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:24.983 [2024-09-30 22:47:51.964911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.983 [2024-09-30 22:47:51.964918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.983 [2024-09-30 22:47:51.964924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.983 [2024-09-30 22:47:51.964930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.983 [2024-09-30 22:47:51.964934] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.964941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.964947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:24.983 [2024-09-30 22:47:51.972902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:24.983 [2024-09-30 22:47:51.972910] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:24.983 [2024-09-30 22:47:51.972914] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.972919] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.972924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.972931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:24.983 [2024-09-30 22:47:51.980900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:24.983 [2024-09-30 22:47:51.980947] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.980953] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.980960] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:24.983 [2024-09-30 22:47:51.980964] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:24.983 [2024-09-30 22:47:51.980966] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:19:24.983 [2024-09-30 22:47:51.980971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:24.983 [2024-09-30 22:47:51.988899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:24.983 [2024-09-30 22:47:51.988909] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:24.983 [2024-09-30 22:47:51.988918] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.988924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.988929] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:24.983 [2024-09-30 22:47:51.988932] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:24.983 [2024-09-30 22:47:51.988935] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:24.983 [2024-09-30 22:47:51.988939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:24.983 [2024-09-30 22:47:51.996899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:24.983 [2024-09-30 22:47:51.996911] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.996917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:24.983 [2024-09-30 22:47:51.996922] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:24.983 [2024-09-30 22:47:51.996925] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:24.983 [2024-09-30 22:47:51.996928] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:24.983 [2024-09-30 22:47:51.996932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:25.246 [2024-09-30 22:47:52.004900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:25.246 [2024-09-30 22:47:52.004909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:25.246 [2024-09-30 22:47:52.004914] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:25.246 [2024-09-30 22:47:52.004921] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:25.246 [2024-09-30 22:47:52.004925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:25.246 [2024-09-30 22:47:52.004928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:25.246 [2024-09-30 22:47:52.004932] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:25.246 [2024-09-30 22:47:52.004940] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:25.246 [2024-09-30 22:47:52.004943] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:25.246 [2024-09-30 22:47:52.004947] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:25.246 [2024-09-30 22:47:52.004960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:25.246 [2024-09-30 22:47:52.012899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:25.246 [2024-09-30 22:47:52.012910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:25.246 [2024-09-30 22:47:52.020899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:25.246 [2024-09-30 22:47:52.020910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:25.246 [2024-09-30 22:47:52.028897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:25.246 [2024-09-30 22:47:52.028907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:25.246 [2024-09-30 22:47:52.036900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:25.246 [2024-09-30 22:47:52.036912] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:25.246 [2024-09-30 22:47:52.036916] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:25.246 [2024-09-30 22:47:52.036918] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:25.246 [2024-09-30 22:47:52.036921] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:25.246 [2024-09-30 22:47:52.036923] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:25.246 [2024-09-30 22:47:52.036928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:25.246 [2024-09-30 22:47:52.036933] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:25.246 [2024-09-30 22:47:52.036936] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:25.246 [2024-09-30 22:47:52.036939] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:25.246 [2024-09-30 22:47:52.036943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:25.246 [2024-09-30 22:47:52.036948] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:25.246 [2024-09-30 22:47:52.036951] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:25.246 [2024-09-30 22:47:52.036954] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:25.246 [2024-09-30 22:47:52.036958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:25.246 [2024-09-30 22:47:52.036963] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:25.246 [2024-09-30 22:47:52.036966] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:25.246 [2024-09-30 22:47:52.036969] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:25.246 [2024-09-30 22:47:52.036975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:25.246 [2024-09-30 22:47:52.044899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:25.246 [2024-09-30 22:47:52.044912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:25.246 [2024-09-30 22:47:52.044919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:25.246 [2024-09-30 22:47:52.044924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:25.246 ===================================================== 00:19:25.246 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:25.246 ===================================================== 00:19:25.246 Controller Capabilities/Features 00:19:25.246 ================================ 00:19:25.246 Vendor ID: 4e58 00:19:25.246 Subsystem Vendor ID: 4e58 00:19:25.246 Serial Number: SPDK2 00:19:25.246 Model Number: SPDK bdev Controller 00:19:25.246 Firmware Version: 25.01 00:19:25.246 Recommended Arb Burst: 6 00:19:25.246 IEEE OUI Identifier: 8d 6b 50 00:19:25.246 Multi-path I/O 00:19:25.246 May have multiple subsystem ports: Yes 00:19:25.246 May have multiple controllers: Yes 00:19:25.246 Associated with SR-IOV VF: No 00:19:25.246 Max Data Transfer Size: 131072 00:19:25.246 Max Number of Namespaces: 32 00:19:25.246 Max Number of I/O Queues: 127 00:19:25.246 NVMe Specification Version (VS): 1.3 00:19:25.246 NVMe Specification Version (Identify): 1.3 00:19:25.246 Maximum Queue Entries: 256 00:19:25.246 Contiguous Queues Required: Yes 00:19:25.246 Arbitration Mechanisms Supported 00:19:25.246 Weighted Round Robin: Not Supported 00:19:25.246 Vendor Specific: Not Supported 00:19:25.246 Reset Timeout: 15000 ms 00:19:25.246 Doorbell Stride: 4 bytes 00:19:25.246 NVM Subsystem Reset: Not Supported 00:19:25.246 Command 
Sets Supported 00:19:25.246 NVM Command Set: Supported 00:19:25.246 Boot Partition: Not Supported 00:19:25.246 Memory Page Size Minimum: 4096 bytes 00:19:25.246 Memory Page Size Maximum: 4096 bytes 00:19:25.246 Persistent Memory Region: Not Supported 00:19:25.246 Optional Asynchronous Events Supported 00:19:25.246 Namespace Attribute Notices: Supported 00:19:25.246 Firmware Activation Notices: Not Supported 00:19:25.246 ANA Change Notices: Not Supported 00:19:25.246 PLE Aggregate Log Change Notices: Not Supported 00:19:25.246 LBA Status Info Alert Notices: Not Supported 00:19:25.246 EGE Aggregate Log Change Notices: Not Supported 00:19:25.246 Normal NVM Subsystem Shutdown event: Not Supported 00:19:25.246 Zone Descriptor Change Notices: Not Supported 00:19:25.246 Discovery Log Change Notices: Not Supported 00:19:25.246 Controller Attributes 00:19:25.246 128-bit Host Identifier: Supported 00:19:25.246 Non-Operational Permissive Mode: Not Supported 00:19:25.246 NVM Sets: Not Supported 00:19:25.246 Read Recovery Levels: Not Supported 00:19:25.246 Endurance Groups: Not Supported 00:19:25.246 Predictable Latency Mode: Not Supported 00:19:25.246 Traffic Based Keep ALive: Not Supported 00:19:25.246 Namespace Granularity: Not Supported 00:19:25.246 SQ Associations: Not Supported 00:19:25.246 UUID List: Not Supported 00:19:25.246 Multi-Domain Subsystem: Not Supported 00:19:25.246 Fixed Capacity Management: Not Supported 00:19:25.246 Variable Capacity Management: Not Supported 00:19:25.246 Delete Endurance Group: Not Supported 00:19:25.246 Delete NVM Set: Not Supported 00:19:25.246 Extended LBA Formats Supported: Not Supported 00:19:25.246 Flexible Data Placement Supported: Not Supported 00:19:25.246 00:19:25.246 Controller Memory Buffer Support 00:19:25.246 ================================ 00:19:25.246 Supported: No 00:19:25.246 00:19:25.246 Persistent Memory Region Support 00:19:25.246 ================================ 00:19:25.246 Supported: No 00:19:25.246 00:19:25.246 Admin Command Set Attributes 00:19:25.246 ============================ 00:19:25.247 Security Send/Receive: Not Supported 00:19:25.247 Format NVM: Not Supported 00:19:25.247 Firmware Activate/Download: Not Supported 00:19:25.247 Namespace Management: Not Supported 00:19:25.247 Device Self-Test: Not Supported 00:19:25.247 Directives: Not Supported 00:19:25.247 NVMe-MI: Not Supported 00:19:25.247 Virtualization Management: Not Supported 00:19:25.247 Doorbell Buffer Config: Not Supported 00:19:25.247 Get LBA Status Capability: Not Supported 00:19:25.247 Command & Feature Lockdown Capability: Not Supported 00:19:25.247 Abort Command Limit: 4 00:19:25.247 Async Event Request Limit: 4 00:19:25.247 Number of Firmware Slots: N/A 00:19:25.247 Firmware Slot 1 Read-Only: N/A 00:19:25.247 Firmware Activation Without Reset: N/A 00:19:25.247 Multiple Update Detection Support: N/A 00:19:25.247 Firmware Update Granularity: No Information Provided 00:19:25.247 Per-Namespace SMART Log: No 00:19:25.247 Asymmetric Namespace Access Log Page: Not Supported 00:19:25.247 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:25.247 Command Effects Log Page: Supported 00:19:25.247 Get Log Page Extended Data: Supported 00:19:25.247 Telemetry Log Pages: Not Supported 00:19:25.247 Persistent Event Log Pages: Not Supported 00:19:25.247 Supported Log Pages Log Page: May Support 00:19:25.247 Commands Supported & Effects Log Page: Not Supported 00:19:25.247 Feature Identifiers & Effects Log Page:May Support 00:19:25.247 NVMe-MI Commands & Effects Log Page: May Support 
00:19:25.247 Data Area 4 for Telemetry Log: Not Supported 00:19:25.247 Error Log Page Entries Supported: 128 00:19:25.247 Keep Alive: Supported 00:19:25.247 Keep Alive Granularity: 10000 ms 00:19:25.247 00:19:25.247 NVM Command Set Attributes 00:19:25.247 ========================== 00:19:25.247 Submission Queue Entry Size 00:19:25.247 Max: 64 00:19:25.247 Min: 64 00:19:25.247 Completion Queue Entry Size 00:19:25.247 Max: 16 00:19:25.247 Min: 16 00:19:25.247 Number of Namespaces: 32 00:19:25.247 Compare Command: Supported 00:19:25.247 Write Uncorrectable Command: Not Supported 00:19:25.247 Dataset Management Command: Supported 00:19:25.247 Write Zeroes Command: Supported 00:19:25.247 Set Features Save Field: Not Supported 00:19:25.247 Reservations: Not Supported 00:19:25.247 Timestamp: Not Supported 00:19:25.247 Copy: Supported 00:19:25.247 Volatile Write Cache: Present 00:19:25.247 Atomic Write Unit (Normal): 1 00:19:25.247 Atomic Write Unit (PFail): 1 00:19:25.247 Atomic Compare & Write Unit: 1 00:19:25.247 Fused Compare & Write: Supported 00:19:25.247 Scatter-Gather List 00:19:25.247 SGL Command Set: Supported (Dword aligned) 00:19:25.247 SGL Keyed: Not Supported 00:19:25.247 SGL Bit Bucket Descriptor: Not Supported 00:19:25.247 SGL Metadata Pointer: Not Supported 00:19:25.247 Oversized SGL: Not Supported 00:19:25.247 SGL Metadata Address: Not Supported 00:19:25.247 SGL Offset: Not Supported 00:19:25.247 Transport SGL Data Block: Not Supported 00:19:25.247 Replay Protected Memory Block: Not Supported 00:19:25.247 00:19:25.247 Firmware Slot Information 00:19:25.247 ========================= 00:19:25.247 Active slot: 1 00:19:25.247 Slot 1 Firmware Revision: 25.01 00:19:25.247 00:19:25.247 00:19:25.247 Commands Supported and Effects 00:19:25.247 ============================== 00:19:25.247 Admin Commands 00:19:25.247 -------------- 00:19:25.247 Get Log Page (02h): Supported 00:19:25.247 Identify (06h): Supported 00:19:25.247 Abort (08h): Supported 00:19:25.247 Set Features (09h): Supported 00:19:25.247 Get Features (0Ah): Supported 00:19:25.247 Asynchronous Event Request (0Ch): Supported 00:19:25.247 Keep Alive (18h): Supported 00:19:25.247 I/O Commands 00:19:25.247 ------------ 00:19:25.247 Flush (00h): Supported LBA-Change 00:19:25.247 Write (01h): Supported LBA-Change 00:19:25.247 Read (02h): Supported 00:19:25.247 Compare (05h): Supported 00:19:25.247 Write Zeroes (08h): Supported LBA-Change 00:19:25.247 Dataset Management (09h): Supported LBA-Change 00:19:25.247 Copy (19h): Supported LBA-Change 00:19:25.247 00:19:25.247 Error Log 00:19:25.247 ========= 00:19:25.247 00:19:25.247 Arbitration 00:19:25.247 =========== 00:19:25.247 Arbitration Burst: 1 00:19:25.247 00:19:25.247 Power Management 00:19:25.247 ================ 00:19:25.247 Number of Power States: 1 00:19:25.247 Current Power State: Power State #0 00:19:25.247 Power State #0: 00:19:25.247 Max Power: 0.00 W 00:19:25.247 Non-Operational State: Operational 00:19:25.247 Entry Latency: Not Reported 00:19:25.247 Exit Latency: Not Reported 00:19:25.247 Relative Read Throughput: 0 00:19:25.247 Relative Read Latency: 0 00:19:25.247 Relative Write Throughput: 0 00:19:25.247 Relative Write Latency: 0 00:19:25.247 Idle Power: Not Reported 00:19:25.247 Active Power: Not Reported 00:19:25.247 Non-Operational Permissive Mode: Not Supported 00:19:25.247 00:19:25.247 Health Information 00:19:25.247 ================== 00:19:25.247 Critical Warnings: 00:19:25.247 Available Spare Space: OK 00:19:25.247 Temperature: OK 00:19:25.247 Device 
Reliability: OK 00:19:25.247 Read Only: No 00:19:25.247 Volatile Memory Backup: OK 00:19:25.247 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:25.247 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:25.247 Available Spare: 0% 00:19:25.247 Available Spare Threshold: 0% 00:19:25.247 [2024-09-30 22:47:52.044997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:25.247 [2024-09-30 22:47:52.052899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:25.247 [2024-09-30 22:47:52.052923] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:25.247 [2024-09-30 22:47:52.052930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.247 [2024-09-30 22:47:52.052935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.247 [2024-09-30 22:47:52.052940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.247 [2024-09-30 22:47:52.052945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.247 [2024-09-30 22:47:52.056899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:25.247 [2024-09-30 22:47:52.056908] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:25.247 [2024-09-30 22:47:52.057002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:25.247 [2024-09-30 22:47:52.057038] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:25.247 [2024-09-30 22:47:52.057043] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:25.247 [2024-09-30 22:47:52.058002] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:25.247 [2024-09-30 22:47:52.058011] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:25.247 [2024-09-30 22:47:52.058058] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:25.247 [2024-09-30 22:47:52.059018] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:25.247 Life Percentage Used: 0% 00:19:25.247 Data Units Read: 0 00:19:25.247 Data Units Written: 0 00:19:25.247 Host Read Commands: 0 00:19:25.247 Host Write Commands: 0 00:19:25.247 Controller Busy Time: 0 minutes 00:19:25.247 Power Cycles: 0 00:19:25.247 Power On Hours: 0 hours 00:19:25.247 Unsafe Shutdowns: 0 00:19:25.247 Unrecoverable Media Errors: 0 00:19:25.247 Lifetime Error Log Entries: 0 00:19:25.247 Warning Temperature Time: 0 minutes 00:19:25.247 Critical Temperature Time: 0 minutes 00:19:25.247 00:19:25.247 Number of Queues 00:19:25.247 ================ 00:19:25.247 Number of
I/O Submission Queues: 127 00:19:25.247 Number of I/O Completion Queues: 127 00:19:25.247 00:19:25.247 Active Namespaces 00:19:25.247 ================= 00:19:25.247 Namespace ID:1 00:19:25.247 Error Recovery Timeout: Unlimited 00:19:25.247 Command Set Identifier: NVM (00h) 00:19:25.247 Deallocate: Supported 00:19:25.247 Deallocated/Unwritten Error: Not Supported 00:19:25.247 Deallocated Read Value: Unknown 00:19:25.247 Deallocate in Write Zeroes: Not Supported 00:19:25.247 Deallocated Guard Field: 0xFFFF 00:19:25.247 Flush: Supported 00:19:25.247 Reservation: Supported 00:19:25.247 Namespace Sharing Capabilities: Multiple Controllers 00:19:25.247 Size (in LBAs): 131072 (0GiB) 00:19:25.247 Capacity (in LBAs): 131072 (0GiB) 00:19:25.247 Utilization (in LBAs): 131072 (0GiB) 00:19:25.247 NGUID: 98F834F4373E49ED84459EB1619A399F 00:19:25.247 UUID: 98f834f4-373e-49ed-8445-9eb1619a399f 00:19:25.247 Thin Provisioning: Not Supported 00:19:25.248 Per-NS Atomic Units: Yes 00:19:25.248 Atomic Boundary Size (Normal): 0 00:19:25.248 Atomic Boundary Size (PFail): 0 00:19:25.248 Atomic Boundary Offset: 0 00:19:25.248 Maximum Single Source Range Length: 65535 00:19:25.248 Maximum Copy Length: 65535 00:19:25.248 Maximum Source Range Count: 1 00:19:25.248 NGUID/EUI64 Never Reused: No 00:19:25.248 Namespace Write Protected: No 00:19:25.248 Number of LBA Formats: 1 00:19:25.248 Current LBA Format: LBA Format #00 00:19:25.248 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:25.248 00:19:25.248 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:25.248 [2024-09-30 22:47:52.234266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:30.542 Initializing NVMe Controllers 00:19:30.542 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:30.543 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:30.543 Initialization complete. Launching workers. 
00:19:30.543 ======================================================== 00:19:30.543 Latency(us) 00:19:30.543 Device Information : IOPS MiB/s Average min max 00:19:30.543 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39998.20 156.24 3200.22 844.65 7792.63 00:19:30.543 ======================================================== 00:19:30.543 Total : 39998.20 156.24 3200.22 844.65 7792.63 00:19:30.543 00:19:30.543 [2024-09-30 22:47:57.345080] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:30.543 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:30.543 [2024-09-30 22:47:57.524636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:35.836 Initializing NVMe Controllers 00:19:35.836 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:35.837 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:35.837 Initialization complete. Launching workers. 00:19:35.837 ======================================================== 00:19:35.837 Latency(us) 00:19:35.837 Device Information : IOPS MiB/s Average min max 00:19:35.837 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40040.00 156.41 3197.36 849.27 7769.47 00:19:35.837 ======================================================== 00:19:35.837 Total : 40040.00 156.41 3197.36 849.27 7769.47 00:19:35.837 00:19:35.837 [2024-09-30 22:48:02.545797] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:35.837 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:35.837 [2024-09-30 22:48:02.737971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:41.153 [2024-09-30 22:48:07.876987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:41.153 Initializing NVMe Controllers 00:19:41.153 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:41.153 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:41.153 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:41.153 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:41.153 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:41.153 Initialization complete. Launching workers. 
00:19:41.153 Starting thread on core 2 00:19:41.153 Starting thread on core 3 00:19:41.153 Starting thread on core 1 00:19:41.153 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:41.153 [2024-09-30 22:48:08.120293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:44.456 [2024-09-30 22:48:11.175541] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:44.456 Initializing NVMe Controllers 00:19:44.456 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:44.456 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:44.456 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:44.456 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:44.456 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:44.456 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:44.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:44.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:44.456 Initialization complete. Launching workers. 00:19:44.456 Starting thread on core 1 with urgent priority queue 00:19:44.456 Starting thread on core 2 with urgent priority queue 00:19:44.456 Starting thread on core 3 with urgent priority queue 00:19:44.456 Starting thread on core 0 with urgent priority queue 00:19:44.456 SPDK bdev Controller (SPDK2 ) core 0: 16225.67 IO/s 6.16 secs/100000 ios 00:19:44.456 SPDK bdev Controller (SPDK2 ) core 1: 11204.33 IO/s 8.93 secs/100000 ios 00:19:44.456 SPDK bdev Controller (SPDK2 ) core 2: 13992.33 IO/s 7.15 secs/100000 ios 00:19:44.456 SPDK bdev Controller (SPDK2 ) core 3: 9113.67 IO/s 10.97 secs/100000 ios 00:19:44.456 ======================================================== 00:19:44.456 00:19:44.457 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:44.457 [2024-09-30 22:48:11.399287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:44.457 Initializing NVMe Controllers 00:19:44.457 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:44.457 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:44.457 Namespace ID: 1 size: 0GB 00:19:44.457 Initialization complete. 00:19:44.457 INFO: using host memory buffer for IO 00:19:44.457 Hello world! 
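The single "Hello world!" line is the whole success path of the hello_world example: it attaches over the supplied transport ID, writes a string through the host memory buffer, reads it back, and prints it. A minimal standalone run, reusing this job's socket path and NQN (a sketch rather than the harness's exact command):

    build/examples/hello_world -d 256 -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'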
00:19:44.457 [2024-09-30 22:48:11.412389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:44.457 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:44.717 [2024-09-30 22:48:11.629874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:46.101 Initializing NVMe Controllers 00:19:46.101 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:46.101 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:46.101 Initialization complete. Launching workers. 00:19:46.101 submit (in ns) avg, min, max = 5696.1, 2821.7, 3999266.7 00:19:46.101 complete (in ns) avg, min, max = 17279.8, 1629.2, 3999810.8 00:19:46.101 00:19:46.101 Submit histogram 00:19:46.101 ================ 00:19:46.101 Range in us Cumulative Count 00:19:46.101 2.813 - 2.827: 0.1462% ( 30) 00:19:46.101 2.827 - 2.840: 0.9014% ( 155) 00:19:46.101 2.840 - 2.853: 3.1477% ( 461) 00:19:46.101 2.853 - 2.867: 6.0761% ( 601) 00:19:46.101 2.867 - 2.880: 9.8377% ( 772) 00:19:46.101 2.880 - 2.893: 14.3887% ( 934) 00:19:46.101 2.893 - 2.907: 20.0945% ( 1171) 00:19:46.101 2.907 - 2.920: 26.9503% ( 1407) 00:19:46.101 2.920 - 2.933: 34.1714% ( 1482) 00:19:46.101 2.933 - 2.947: 40.4570% ( 1290) 00:19:46.101 2.947 - 2.960: 44.8667% ( 905) 00:19:46.101 2.960 - 2.973: 51.5617% ( 1374) 00:19:46.101 2.973 - 2.987: 61.0827% ( 1954) 00:19:46.101 2.987 - 3.000: 72.0314% ( 2247) 00:19:46.101 3.000 - 3.013: 80.6705% ( 1773) 00:19:46.101 3.013 - 3.027: 86.2057% ( 1136) 00:19:46.101 3.027 - 3.040: 91.0296% ( 990) 00:19:46.101 3.040 - 3.053: 95.1079% ( 837) 00:19:46.101 3.053 - 3.067: 97.5199% ( 495) 00:19:46.101 3.067 - 3.080: 98.5870% ( 219) 00:19:46.101 3.080 - 3.093: 99.2496% ( 136) 00:19:46.101 3.093 - 3.107: 99.5079% ( 53) 00:19:46.101 3.107 - 3.120: 99.5810% ( 15) 00:19:46.101 3.120 - 3.133: 99.5956% ( 3) 00:19:46.101 3.147 - 3.160: 99.6004% ( 1) 00:19:46.101 3.187 - 3.200: 99.6053% ( 1) 00:19:46.101 3.267 - 3.280: 99.6102% ( 1) 00:19:46.101 3.320 - 3.333: 99.6151% ( 1) 00:19:46.101 3.400 - 3.413: 99.6199% ( 1) 00:19:46.101 3.600 - 3.627: 99.6248% ( 1) 00:19:46.101 3.787 - 3.813: 99.6297% ( 1) 00:19:46.101 3.867 - 3.893: 99.6346% ( 1) 00:19:46.101 4.000 - 4.027: 99.6394% ( 1) 00:19:46.101 4.107 - 4.133: 99.6443% ( 1) 00:19:46.101 4.240 - 4.267: 99.6492% ( 1) 00:19:46.102 4.400 - 4.427: 99.6540% ( 1) 00:19:46.102 4.480 - 4.507: 99.6589% ( 1) 00:19:46.102 4.987 - 5.013: 99.6638% ( 1) 00:19:46.102 5.013 - 5.040: 99.6735% ( 2) 00:19:46.102 5.040 - 5.067: 99.6784% ( 1) 00:19:46.102 5.147 - 5.173: 99.6833% ( 1) 00:19:46.102 5.200 - 5.227: 99.6882% ( 1) 00:19:46.102 5.440 - 5.467: 99.6979% ( 2) 00:19:46.102 5.493 - 5.520: 99.7028% ( 1) 00:19:46.102 5.653 - 5.680: 99.7076% ( 1) 00:19:46.102 5.707 - 5.733: 99.7125% ( 1) 00:19:46.102 5.787 - 5.813: 99.7271% ( 3) 00:19:46.102 5.813 - 5.840: 99.7320% ( 1) 00:19:46.102 5.840 - 5.867: 99.7418% ( 2) 00:19:46.102 5.867 - 5.893: 99.7466% ( 1) 00:19:46.102 5.893 - 5.920: 99.7564% ( 2) 00:19:46.102 5.920 - 5.947: 99.7612% ( 1) 00:19:46.102 5.947 - 5.973: 99.7661% ( 1) 00:19:46.102 5.973 - 6.000: 99.7856% ( 4) 00:19:46.102 6.027 - 6.053: 99.7905% ( 1) 00:19:46.102 6.053 - 6.080: 99.8002% ( 2) 00:19:46.102 6.080 - 6.107: 99.8148% ( 3) 00:19:46.102 6.107 - 6.133: 
99.8197% ( 1) 00:19:46.102 6.133 - 6.160: 99.8246% ( 1) 00:19:46.102 6.160 - 6.187: 99.8392% ( 3) 00:19:46.102 6.187 - 6.213: 99.8441% ( 1) 00:19:46.102 6.320 - 6.347: 99.8489% ( 1) 00:19:46.102 6.347 - 6.373: 99.8538% ( 1) 00:19:46.102 6.400 - 6.427: 99.8587% ( 1) 00:19:46.102 6.427 - 6.453: 99.8636% ( 1) 00:19:46.102 6.480 - 6.507: 99.8684% ( 1) 00:19:46.102 6.667 - 6.693: 99.8733% ( 1) 00:19:46.102 6.720 - 6.747: 99.8782% ( 1) 00:19:46.102 6.747 - 6.773: 99.8831% ( 1) 00:19:46.102 6.800 - 6.827: 99.8879% ( 1) 00:19:46.102 6.827 - 6.880: 99.8928% ( 1) 00:19:46.102 6.987 - 7.040: 99.8977% ( 1) 00:19:46.102 7.040 - 7.093: 99.9025% ( 1) 00:19:46.102 7.573 - 7.627: 99.9074% ( 1) 00:19:46.102 8.213 - 8.267: 99.9123% ( 1) 00:19:46.102 8.907 - 8.960: 99.9172% ( 1) 00:19:46.102 9.333 - 9.387: 99.9220% ( 1) 00:19:46.102 10.667 - 10.720: 99.9269% ( 1) 00:19:46.102 12.800 - 12.853: 99.9318% ( 1) 00:19:46.102 [2024-09-30 22:48:12.725415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:46.102 3986.773 - 4014.080: 100.0000% ( 14) 00:19:46.102 00:19:46.102 Complete histogram 00:19:46.102 ================== 00:19:46.102 Range in us Cumulative Count 00:19:46.102 1.627 - 1.633: 0.0049% ( 1) 00:19:46.102 1.633 - 1.640: 0.0244% ( 4) 00:19:46.102 1.640 - 1.647: 0.9745% ( 195) 00:19:46.102 1.647 - 1.653: 1.1061% ( 27) 00:19:46.102 1.653 - 1.660: 1.1402% ( 7) 00:19:46.102 1.660 - 1.667: 1.3107% ( 35) 00:19:46.102 1.667 - 1.673: 1.3546% ( 9) 00:19:46.102 1.673 - 1.680: 5.9689% ( 947) 00:19:46.102 1.680 - 1.687: 43.7412% ( 7752) 00:19:46.102 1.687 - 1.693: 51.6981% ( 1633) 00:19:46.102 1.693 - 1.700: 63.7090% ( 2465) 00:19:46.102 1.700 - 1.707: 72.7330% ( 1852) 00:19:46.102 1.707 - 1.720: 82.2833% ( 1960) 00:19:46.102 1.720 - 1.733: 83.9546% ( 343) 00:19:46.102 1.733 - 1.747: 86.3422% ( 490) 00:19:46.102 1.747 - 1.760: 91.1173% ( 980) 00:19:46.102 1.760 - 1.773: 96.0191% ( 1006) 00:19:46.102 1.773 - 1.787: 98.4067% ( 490) 00:19:46.102 1.787 - 1.800: 99.1765% ( 158) 00:19:46.102 1.800 - 1.813: 99.4007% ( 46) 00:19:46.102 1.813 - 1.827: 99.4299% ( 6) 00:19:46.102 1.853 - 1.867: 99.4348% ( 1) 00:19:46.102 2.160 - 2.173: 99.4397% ( 1) 00:19:46.102 3.867 - 3.893: 99.4445% ( 1) 00:19:46.102 3.920 - 3.947: 99.4494% ( 1) 00:19:46.102 3.947 - 3.973: 99.4543% ( 1) 00:19:46.102 4.027 - 4.053: 99.4591% ( 1) 00:19:46.102 4.080 - 4.107: 99.4640% ( 1) 00:19:46.102 4.267 - 4.293: 99.4689% ( 1) 00:19:46.102 4.347 - 4.373: 99.4738% ( 1) 00:19:46.102 4.400 - 4.427: 99.4786% ( 1) 00:19:46.102 4.507 - 4.533: 99.4835% ( 1) 00:19:46.102 4.533 - 4.560: 99.4884% ( 1) 00:19:46.102 4.613 - 4.640: 99.4933% ( 1) 00:19:46.102 4.693 - 4.720: 99.5030% ( 2) 00:19:46.102 4.827 - 4.853: 99.5079% ( 1) 00:19:46.102 4.880 - 4.907: 99.5127% ( 1) 00:19:46.102 4.987 - 5.013: 99.5176% ( 1) 00:19:46.102 5.040 - 5.067: 99.5225% ( 1) 00:19:46.102 5.200 - 5.227: 99.5274% ( 1) 00:19:46.102 5.253 - 5.280: 99.5322% ( 1) 00:19:46.102 5.280 - 5.307: 99.5371% ( 1) 00:19:46.102 5.333 - 5.360: 99.5420% ( 1) 00:19:46.102 5.360 - 5.387: 99.5468% ( 1) 00:19:46.102 5.413 - 5.440: 99.5566% ( 2) 00:19:46.102 5.547 - 5.573: 99.5615% ( 1) 00:19:46.102 5.627 - 5.653: 99.5663% ( 1) 00:19:46.102 5.787 - 5.813: 99.5712% ( 1) 00:19:46.102 5.840 - 5.867: 99.5761% ( 1) 00:19:46.102 6.080 - 6.107: 99.5810% ( 1) 00:19:46.102 6.107 - 6.133: 99.5858% ( 1) 00:19:46.102 6.587 - 6.613: 99.5907% ( 1) 00:19:46.102 8.107 - 8.160: 99.5956% ( 1) 00:19:46.102 8.853 - 8.907: 99.6004% ( 1) 00:19:46.102 10.560 - 10.613: 99.6053% ( 1) 
00:19:46.102 13.280 - 13.333: 99.6102% ( 1) 00:19:46.102 3986.773 - 4014.080: 100.0000% ( 80) 00:19:46.102 00:19:46.102 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:46.102 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:46.102 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:46.102 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:46.102 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:46.102 [ 00:19:46.102 { 00:19:46.102 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:46.102 "subtype": "Discovery", 00:19:46.102 "listen_addresses": [], 00:19:46.102 "allow_any_host": true, 00:19:46.102 "hosts": [] 00:19:46.102 }, 00:19:46.102 { 00:19:46.102 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:46.102 "subtype": "NVMe", 00:19:46.102 "listen_addresses": [ 00:19:46.102 { 00:19:46.102 "trtype": "VFIOUSER", 00:19:46.102 "adrfam": "IPv4", 00:19:46.102 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:46.102 "trsvcid": "0" 00:19:46.102 } 00:19:46.102 ], 00:19:46.102 "allow_any_host": true, 00:19:46.102 "hosts": [], 00:19:46.102 "serial_number": "SPDK1", 00:19:46.102 "model_number": "SPDK bdev Controller", 00:19:46.102 "max_namespaces": 32, 00:19:46.102 "min_cntlid": 1, 00:19:46.102 "max_cntlid": 65519, 00:19:46.102 "namespaces": [ 00:19:46.102 { 00:19:46.102 "nsid": 1, 00:19:46.103 "bdev_name": "Malloc1", 00:19:46.103 "name": "Malloc1", 00:19:46.103 "nguid": "CC07D41D6D2B42F7B40C5866AAFC4B73", 00:19:46.103 "uuid": "cc07d41d-6d2b-42f7-b40c-5866aafc4b73" 00:19:46.103 }, 00:19:46.103 { 00:19:46.103 "nsid": 2, 00:19:46.103 "bdev_name": "Malloc3", 00:19:46.103 "name": "Malloc3", 00:19:46.103 "nguid": "1A46EF6B28284149A26AF988BE684A93", 00:19:46.103 "uuid": "1a46ef6b-2828-4149-a26a-f988be684a93" 00:19:46.103 } 00:19:46.103 ] 00:19:46.103 }, 00:19:46.103 { 00:19:46.103 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:46.103 "subtype": "NVMe", 00:19:46.103 "listen_addresses": [ 00:19:46.103 { 00:19:46.103 "trtype": "VFIOUSER", 00:19:46.103 "adrfam": "IPv4", 00:19:46.103 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:46.103 "trsvcid": "0" 00:19:46.103 } 00:19:46.103 ], 00:19:46.103 "allow_any_host": true, 00:19:46.103 "hosts": [], 00:19:46.103 "serial_number": "SPDK2", 00:19:46.103 "model_number": "SPDK bdev Controller", 00:19:46.103 "max_namespaces": 32, 00:19:46.103 "min_cntlid": 1, 00:19:46.103 "max_cntlid": 65519, 00:19:46.103 "namespaces": [ 00:19:46.103 { 00:19:46.103 "nsid": 1, 00:19:46.103 "bdev_name": "Malloc2", 00:19:46.103 "name": "Malloc2", 00:19:46.103 "nguid": "98F834F4373E49ED84459EB1619A399F", 00:19:46.103 "uuid": "98f834f4-373e-49ed-8445-9eb1619a399f" 00:19:46.103 } 00:19:46.103 ] 00:19:46.103 } 00:19:46.103 ] 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=670262 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 
00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:46.103 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:46.103 [2024-09-30 22:48:13.093151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:46.364 Malloc4 00:19:46.364 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:46.364 [2024-09-30 22:48:13.295523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:46.364 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:46.364 Asynchronous Event Request test 00:19:46.364 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:46.364 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:46.364 Registering asynchronous event callbacks... 00:19:46.364 Starting namespace attribute notice tests for all controllers... 00:19:46.364 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:46.364 aer_cb - Changed Namespace 00:19:46.364 Cleaning up... 
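The Changed Namespace callback above is the event the aer tool was parked on: the tool touches /tmp/aer_touch_file once its AER callbacks are registered, the harness then hot-adds a namespace over RPC, and the resulting namespace-attribute AEN arrives as the aer_cb lines. Condensed to the two RPCs that trigger it (names as used in this run):

    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # -> aer_cb for log page 4 (Changed Namespace) on /var/run/vfio-user/domain/vfio-user2/2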
00:19:46.624 [ 00:19:46.624 { 00:19:46.624 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:46.624 "subtype": "Discovery", 00:19:46.624 "listen_addresses": [], 00:19:46.624 "allow_any_host": true, 00:19:46.624 "hosts": [] 00:19:46.624 }, 00:19:46.624 { 00:19:46.624 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:46.624 "subtype": "NVMe", 00:19:46.624 "listen_addresses": [ 00:19:46.624 { 00:19:46.624 "trtype": "VFIOUSER", 00:19:46.624 "adrfam": "IPv4", 00:19:46.624 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:46.624 "trsvcid": "0" 00:19:46.624 } 00:19:46.624 ], 00:19:46.624 "allow_any_host": true, 00:19:46.624 "hosts": [], 00:19:46.624 "serial_number": "SPDK1", 00:19:46.624 "model_number": "SPDK bdev Controller", 00:19:46.624 "max_namespaces": 32, 00:19:46.624 "min_cntlid": 1, 00:19:46.624 "max_cntlid": 65519, 00:19:46.624 "namespaces": [ 00:19:46.624 { 00:19:46.624 "nsid": 1, 00:19:46.624 "bdev_name": "Malloc1", 00:19:46.624 "name": "Malloc1", 00:19:46.624 "nguid": "CC07D41D6D2B42F7B40C5866AAFC4B73", 00:19:46.624 "uuid": "cc07d41d-6d2b-42f7-b40c-5866aafc4b73" 00:19:46.624 }, 00:19:46.624 { 00:19:46.624 "nsid": 2, 00:19:46.624 "bdev_name": "Malloc3", 00:19:46.624 "name": "Malloc3", 00:19:46.624 "nguid": "1A46EF6B28284149A26AF988BE684A93", 00:19:46.624 "uuid": "1a46ef6b-2828-4149-a26a-f988be684a93" 00:19:46.624 } 00:19:46.624 ] 00:19:46.624 }, 00:19:46.624 { 00:19:46.624 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:46.624 "subtype": "NVMe", 00:19:46.624 "listen_addresses": [ 00:19:46.624 { 00:19:46.624 "trtype": "VFIOUSER", 00:19:46.624 "adrfam": "IPv4", 00:19:46.624 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:46.624 "trsvcid": "0" 00:19:46.624 } 00:19:46.624 ], 00:19:46.624 "allow_any_host": true, 00:19:46.624 "hosts": [], 00:19:46.624 "serial_number": "SPDK2", 00:19:46.624 "model_number": "SPDK bdev Controller", 00:19:46.624 "max_namespaces": 32, 00:19:46.624 "min_cntlid": 1, 00:19:46.624 "max_cntlid": 65519, 00:19:46.624 "namespaces": [ 00:19:46.624 { 00:19:46.624 "nsid": 1, 00:19:46.624 "bdev_name": "Malloc2", 00:19:46.624 "name": "Malloc2", 00:19:46.624 "nguid": "98F834F4373E49ED84459EB1619A399F", 00:19:46.624 "uuid": "98f834f4-373e-49ed-8445-9eb1619a399f" 00:19:46.624 }, 00:19:46.624 { 00:19:46.624 "nsid": 2, 00:19:46.624 "bdev_name": "Malloc4", 00:19:46.624 "name": "Malloc4", 00:19:46.624 "nguid": "E78ECE4324964951883949AFFAA2D1AF", 00:19:46.624 "uuid": "e78ece43-2496-4951-8839-49affaa2d1af" 00:19:46.624 } 00:19:46.624 ] 00:19:46.624 } 00:19:46.624 ] 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 670262 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 660804 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 660804 ']' 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 660804 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 660804 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:46.624 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 660804' 00:19:46.625 killing process with pid 660804 00:19:46.625 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 660804 00:19:46.625 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 660804 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=670450 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 670450' 00:19:46.886 Process pid: 670450 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 670450 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 670450 ']' 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.886 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:46.886 [2024-09-30 22:48:13.798012] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:46.886 [2024-09-30 22:48:13.799000] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:19:46.886 [2024-09-30 22:48:13.799043] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:46.886 [2024-09-30 22:48:13.876678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:47.147 [2024-09-30 22:48:13.931932] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:47.147 [2024-09-30 22:48:13.931967] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:47.147 [2024-09-30 22:48:13.931973] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:47.147 [2024-09-30 22:48:13.931977] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:47.147 [2024-09-30 22:48:13.931981] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:47.147 [2024-09-30 22:48:13.932117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:47.147 [2024-09-30 22:48:13.932267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:47.147 [2024-09-30 22:48:13.932416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:47.147 [2024-09-30 22:48:13.932419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:19:47.147 [2024-09-30 22:48:13.999996] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:19:47.147 [2024-09-30 22:48:14.001051] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:19:47.147 [2024-09-30 22:48:14.001547] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:19:47.147 [2024-09-30 22:48:14.002225] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:19:47.147 [2024-09-30 22:48:14.002264] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
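The app_setup_trace notices above name the two ways to pull a trace out of the running target; a sketch that follows those hints exactly (the tool and file names are the ones the notices themselves print):

    # Snapshot trace events from shared-memory instance 0 of the nvmf app:
    spdk_trace -s nvmf -i 0

    # Or keep the raw shared-memory trace file for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0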
00:19:47.718 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:47.718 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0
00:19:47.718 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:19:48.660 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
00:19:48.920 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:19:48.920 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:19:48.920 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:19:48.920 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:19:48.920 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:19:49.181 Malloc1
00:19:49.181 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:19:49.181 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:19:49.442 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:19:49.703 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:19:49.703 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:19:49.703 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:19:49.962 Malloc2
00:19:49.962 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:19:49.962 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:19:50.222 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 670450
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 670450 ']'
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 670450
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 670450
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 670450'
00:19:50.483 killing process with pid 670450
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 670450
00:19:50.483 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 670450
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:19:50.746 
00:19:50.746 real 0m50.786s
00:19:50.746 user 3m14.630s
00:19:50.746 sys 0m2.661s
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:19:50.746 ************************************
00:19:50.746 END TEST nvmf_vfio_user
00:19:50.746 ************************************
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:50.746 ************************************
00:19:50.746 START TEST nvmf_vfio_user_nvme_compliance
00:19:50.746 ************************************
00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:19:50.746 * Looking for test storage...
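Before the storage banner resumes below, note that each pass of the setup loop traced above provisions one vfio-user controller with the same four RPCs. Condensed into a sketch (workspace prefix trimmed; i stands in for the loop variable from seq 1 $NUM_DEVICES):

    i=1
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i    # 64 MB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0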
00:19:50.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:19:50.746 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:51.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.009 --rc genhtml_branch_coverage=1 00:19:51.009 --rc genhtml_function_coverage=1 00:19:51.009 --rc genhtml_legend=1 00:19:51.009 --rc geninfo_all_blocks=1 00:19:51.009 --rc geninfo_unexecuted_blocks=1 00:19:51.009 00:19:51.009 ' 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:51.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.009 --rc genhtml_branch_coverage=1 00:19:51.009 --rc genhtml_function_coverage=1 00:19:51.009 --rc genhtml_legend=1 00:19:51.009 --rc geninfo_all_blocks=1 00:19:51.009 --rc geninfo_unexecuted_blocks=1 00:19:51.009 00:19:51.009 ' 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:51.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.009 --rc genhtml_branch_coverage=1 00:19:51.009 --rc genhtml_function_coverage=1 00:19:51.009 --rc genhtml_legend=1 00:19:51.009 --rc geninfo_all_blocks=1 00:19:51.009 --rc geninfo_unexecuted_blocks=1 00:19:51.009 00:19:51.009 ' 00:19:51.009 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:51.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.009 --rc genhtml_branch_coverage=1 00:19:51.009 --rc genhtml_function_coverage=1 00:19:51.009 --rc genhtml_legend=1 00:19:51.009 --rc geninfo_all_blocks=1 00:19:51.009 --rc 
geninfo_unexecuted_blocks=1 00:19:51.009 00:19:51.009 ' 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}")
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:51.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=671215
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 671215'
00:19:51.010 Process pid: 671215
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 671215
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 671215 ']'
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:51.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:51.010 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:51.010 [2024-09-30 22:48:17.895422] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
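The compliance target is pinned with -m 0x7, which the EAL line below renders as -c 0x7 and three available cores. Decoding such a mask by hand is a one-liner; a sketch:

    mask=0x7
    for i in $(seq 0 31); do
      (( (mask >> i) & 1 )) && echo "core $i"    # 0x7 -> cores 0, 1, 2
    done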
00:19:51.010 [2024-09-30 22:48:17.895494] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:51.010 [2024-09-30 22:48:17.979158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:51.272 [2024-09-30 22:48:18.050943] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:51.272 [2024-09-30 22:48:18.050987] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:51.272 [2024-09-30 22:48:18.050994] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:51.272 [2024-09-30 22:48:18.050998] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:51.272 [2024-09-30 22:48:18.051002] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:51.272 [2024-09-30 22:48:18.051237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:51.272 [2024-09-30 22:48:18.051468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:51.272 [2024-09-30 22:48:18.051469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:51.845 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:51.845 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0
00:19:51.845 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:52.788 malloc0
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:19:52.788 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:52.789 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:52.789 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:52.789 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:19:52.789 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:52.789 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:52.789 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:52.789 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:19:53.050 
00:19:53.050 
00:19:53.050 CUnit - A unit testing framework for C - Version 2.1-3
00:19:53.050 http://cunit.sourceforge.net/
00:19:53.050 
00:19:53.050 
00:19:53.050 Suite: nvme_compliance
00:19:53.050 Test: admin_identify_ctrlr_verify_dptr ...[2024-09-30 22:48:19.929293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.050 [2024-09-30 22:48:19.930595] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
00:19:53.050 [2024-09-30 22:48:19.930607] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
00:19:53.050 [2024-09-30 22:48:19.930612] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
00:19:53.050 [2024-09-30 22:48:19.932310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:53.050 passed
00:19:53.050 Test: admin_identify_ctrlr_verify_fused ...[2024-09-30 22:48:20.008264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.050 [2024-09-30 22:48:20.011295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:53.050 passed
00:19:53.311 Test: admin_identify_ns ...[2024-09-30 22:48:20.091273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.311 [2024-09-30 22:48:20.151907] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:19:53.311 [2024-09-30 22:48:20.159907] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:19:53.311 [2024-09-30 22:48:20.180980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:53.311 passed
00:19:53.311 Test: admin_get_features_mandatory_features ...[2024-09-30 22:48:20.254204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.311 [2024-09-30 22:48:20.257228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:53.311 passed
00:19:53.571 Test: admin_get_features_optional_features ...[2024-09-30 22:48:20.334698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.571 [2024-09-30 22:48:20.337719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:53.571 passed
00:19:53.571 Test: admin_set_features_number_of_queues ...[2024-09-30 22:48:20.415289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.571 [2024-09-30 22:48:20.520986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:53.571 passed
00:19:53.831 Test: admin_get_log_page_mandatory_logs ...[2024-09-30 22:48:20.593199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.831 [2024-09-30 22:48:20.597229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:53.831 passed
00:19:53.831 Test: admin_get_log_page_with_lpo ...[2024-09-30 22:48:20.672255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.831 [2024-09-30 22:48:20.751904] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:19:53.831 [2024-09-30 22:48:20.764940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:53.831 passed
00:19:53.831 Test: fabric_property_get ...[2024-09-30 22:48:20.835123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:53.831 [2024-09-30 22:48:20.836320] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed
00:19:53.831 [2024-09-30 22:48:20.838144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:54.091 passed
00:19:54.091 Test: admin_delete_io_sq_use_admin_qid ...[2024-09-30 22:48:20.914621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:54.091 [2024-09-30 22:48:20.915815] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:19:54.091 [2024-09-30 22:48:20.917646] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:54.091 passed
00:19:54.091 Test: admin_delete_io_sq_delete_sq_twice ...[2024-09-30 22:48:20.994394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:54.091 [2024-09-30 22:48:21.078899] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:19:54.091 [2024-09-30 22:48:21.094898] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:19:54.091 [2024-09-30 22:48:21.099982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:54.353 passed
00:19:54.353 Test: admin_delete_io_cq_use_admin_qid ...[2024-09-30 22:48:21.174230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:54.353 [2024-09-30 22:48:21.175433] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:19:54.353 [2024-09-30 22:48:21.177249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:54.353 passed
00:19:54.353 Test: admin_delete_io_cq_delete_cq_first ...[2024-09-30 22:48:21.250965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:54.353 [2024-09-30 22:48:21.328902] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:19:54.353 [2024-09-30 22:48:21.352902] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:19:54.353 [2024-09-30 22:48:21.357967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:54.613 passed
00:19:54.613 Test: admin_create_io_cq_verify_iv_pc ...[2024-09-30 22:48:21.431185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:54.613 [2024-09-30 22:48:21.432386] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:19:54.613 [2024-09-30 22:48:21.432405] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:19:54.613 [2024-09-30 22:48:21.434206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:54.613 passed
00:19:54.613 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-09-30 22:48:21.509253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:54.613 [2024-09-30 22:48:21.604899] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:19:54.613 [2024-09-30 22:48:21.612907] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:19:54.613 [2024-09-30 22:48:21.620898] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:19:54.613 [2024-09-30 22:48:21.628900] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:19:54.874 [2024-09-30 22:48:21.657967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:54.874 passed
00:19:54.874 Test: admin_create_io_sq_verify_pc ...[2024-09-30 22:48:21.730179] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:54.874 [2024-09-30 22:48:21.748910] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:19:54.874 [2024-09-30 22:48:21.766341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:54.874 passed
00:19:54.874 Test: admin_create_io_qp_max_qps ...[2024-09-30 22:48:21.841802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:56.340 [2024-09-30 22:48:22.959900] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:19:56.638 [2024-09-30 22:48:23.340130] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:56.638 passed
00:19:56.638 Test: admin_create_io_sq_shared_cq ...[2024-09-30 22:48:23.413918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:56.638 [2024-09-30 22:48:23.547901] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:19:56.638 [2024-09-30 22:48:23.584947] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:56.638 passed
00:19:56.638 
00:19:56.638 Run Summary: Type Total Ran Passed Failed Inactive
00:19:56.638 suites 1 1 n/a 0 0
00:19:56.638 tests 18 18 18 0 0
00:19:56.638 asserts 360 360 360 0 n/a
00:19:56.638 
00:19:56.638 Elapsed time = 1.502 seconds
00:19:56.638 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 671215
00:19:56.638 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 671215 ']'
00:19:56.638 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 671215
00:19:56.638 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname
00:19:56.638 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:56.638 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 671215
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 671215'
00:19:56.921 killing process with pid 671215
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 671215
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 671215
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:19:56.921 
00:19:56.921 real 0m6.239s
00:19:56.921 user 0m17.552s
00:19:56.921 sys 0m0.551s
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:56.921 ************************************
00:19:56.921 END TEST nvmf_vfio_user_nvme_compliance
00:19:56.921 ************************************
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:56.921 ************************************
00:19:56.921 START TEST nvmf_vfio_user_fuzz
00:19:56.921 ************************************
00:19:56.921 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:19:57.210 * Looking for test storage...
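The storage discovery resuming below once again walks scripts/common.sh's cmp_versions helper to decide whether the installed lcov predates 2.0. The idiom it traces splits both version strings on dots and dashes and compares field by field; a standalone sketch of the same logic, not the in-tree helper itself:

    lt() {                                   # succeeds when $1 < $2
      local IFS=.- v
      local -a a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1
    }
    lt 1.15 2 && echo '1.15 < 2'             # matches the traced outcome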
00:19:57.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:57.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.210 --rc genhtml_branch_coverage=1 00:19:57.210 --rc genhtml_function_coverage=1 00:19:57.210 --rc genhtml_legend=1 00:19:57.210 --rc geninfo_all_blocks=1 00:19:57.210 --rc geninfo_unexecuted_blocks=1 00:19:57.210 00:19:57.210 ' 00:19:57.210 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:57.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.210 --rc genhtml_branch_coverage=1 00:19:57.210 --rc genhtml_function_coverage=1 00:19:57.210 --rc genhtml_legend=1 00:19:57.211 --rc geninfo_all_blocks=1 00:19:57.211 --rc geninfo_unexecuted_blocks=1 00:19:57.211 00:19:57.211 ' 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.211 --rc genhtml_branch_coverage=1 00:19:57.211 --rc genhtml_function_coverage=1 00:19:57.211 --rc genhtml_legend=1 00:19:57.211 --rc geninfo_all_blocks=1 00:19:57.211 --rc geninfo_unexecuted_blocks=1 00:19:57.211 00:19:57.211 ' 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.211 --rc genhtml_branch_coverage=1 00:19:57.211 --rc genhtml_function_coverage=1 00:19:57.211 --rc genhtml_legend=1 00:19:57.211 --rc geninfo_all_blocks=1 00:19:57.211 --rc geninfo_unexecuted_blocks=1 00:19:57.211 00:19:57.211 ' 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']'
00:19:57.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=672616
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 672616'
00:19:57.211 Process pid: 672616
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 672616
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 672616 ']'
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:57.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
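One real wart shows up in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash rejects the empty operand with "[: : integer expression expected" before the test falls through as false. The failure and its usual guard are easy to demonstrate:

    v=''
    [ "$v" -eq 1 ] && echo yes         # bash: [: : integer expression expected
    [ "${v:-0}" -eq 1 ] && echo yes    # defaulting the empty value avoids the error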
00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.211 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:58.151 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.151 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:58.151 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:59.092 malloc0 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
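[Editor's note] For reference, the target bring-up that the rpc_cmd traces above performed is this short sequence; a sketch using scripts/rpc.py against the default /var/tmp/spdk.sock, with the arguments exactly as they appear in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                     # vfio-user transport
  mkdir -p /var/run/vfio-user                                # socket directory for the listener
  $rpc bdev_malloc_create 64 512 -b malloc0                  # 64 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows then connects over that vfio-user socket for 30 seconds (-t 30) with a fixed seed (-S 123456), which is why the opcode/seed summary below is reproducible.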
00:19:59.092 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:31.210 Fuzzing completed. Shutting down the fuzz application 00:20:31.210 00:20:31.210 Dumping successful admin opcodes: 00:20:31.210 8, 9, 10, 24, 00:20:31.210 Dumping successful io opcodes: 00:20:31.210 0, 00:20:31.210 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1333723, total successful commands: 5224, random_seed: 4255431424 00:20:31.210 NS: 0x200003a1ef00 admin qp, Total commands completed: 297347, total successful commands: 2400, random_seed: 1554511168 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 672616 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 672616 ']' 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 672616 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 672616 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 672616' 00:20:31.210 killing process with pid 672616 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 672616 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 672616 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:31.210 00:20:31.210 real 0m32.898s 00:20:31.210 user 0m37.718s 00:20:31.210 sys 0m23.841s 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:31.210 
************************************ 00:20:31.210 END TEST nvmf_vfio_user_fuzz 00:20:31.210 ************************************ 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:31.210 ************************************ 00:20:31.210 START TEST nvmf_auth_target 00:20:31.210 ************************************ 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:31.210 * Looking for test storage... 00:20:31.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:31.210 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:31.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.210 --rc genhtml_branch_coverage=1 00:20:31.210 --rc genhtml_function_coverage=1 00:20:31.210 --rc genhtml_legend=1 00:20:31.210 --rc geninfo_all_blocks=1 00:20:31.210 --rc geninfo_unexecuted_blocks=1 00:20:31.210 00:20:31.210 ' 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:31.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.210 --rc genhtml_branch_coverage=1 00:20:31.210 --rc genhtml_function_coverage=1 00:20:31.210 --rc genhtml_legend=1 00:20:31.210 --rc geninfo_all_blocks=1 00:20:31.210 --rc geninfo_unexecuted_blocks=1 00:20:31.210 00:20:31.210 ' 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:31.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.210 --rc genhtml_branch_coverage=1 00:20:31.210 --rc genhtml_function_coverage=1 00:20:31.210 --rc genhtml_legend=1 00:20:31.210 --rc geninfo_all_blocks=1 00:20:31.210 --rc geninfo_unexecuted_blocks=1 00:20:31.210 00:20:31.210 ' 00:20:31.210 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:31.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.211 --rc genhtml_branch_coverage=1 00:20:31.211 --rc genhtml_function_coverage=1 00:20:31.211 --rc genhtml_legend=1 00:20:31.211 --rc geninfo_all_blocks=1 00:20:31.211 --rc geninfo_unexecuted_blocks=1 00:20:31.211 00:20:31.211 ' 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.211 22:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:31.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:31.211 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:37.804 
22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:37.804 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:37.804 22:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:37.804 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:37.804 Found net devices under 0000:31:00.0: cvl_0_0 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:37.804 Found net devices under 0000:31:00.1: cvl_0_1 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
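[Editor's note] The NIC discovery loop above is driven entirely by sysfs: each supported PCI function (here the two e810 ports, device ID 0x159b) is mapped to its kernel net device through the /sys/bus/pci/devices/<addr>/net/ glob, which is the same glob pci_net_devs uses in the trace. A standalone sketch with the first address from the trace:

  pci=0000:31:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      # ${dev##*/} strips the sysfs path, leaving the interface name (cvl_0_0)
      [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
  done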
00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.804 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:37.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:37.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:20:37.805 00:20:37.805 --- 10.0.0.2 ping statistics --- 00:20:37.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.805 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:20:37.805 00:20:37.805 --- 10.0.0.1 ping statistics --- 00:20:37.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.805 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=682673 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 682673 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 682673 ']' 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
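[Editor's note] nvmf_tcp_init, traced above, splits the two e810 ports into a point-to-point link: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Collected from the trace into one block (root required; device names and addresses are the ones shown above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The round trips confirmed by the ping statistics above are what lets nvmfappstart then launch nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), so target and initiator exercise a real TCP path over the physical link.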
00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.805 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=683012 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:38.748 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=0f206f8281426a36b6de9531080659549978cfa8b687a038 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.tLg 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 0f206f8281426a36b6de9531080659549978cfa8b687a038 0 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 0f206f8281426a36b6de9531080659549978cfa8b687a038 0 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=0f206f8281426a36b6de9531080659549978cfa8b687a038 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
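[Editor's note] gen_dhchap_key, traced above, draws len/2 random bytes as a hex string and hands that string to an inline python helper (its body is not shown in the trace) which wraps it into the NVMe DH-HMAC-CHAP secret envelope of the form DHHC-1:<digest>:<encoded secret>: before the file is locked down. The observable shell half, sketched for the 48-hex-character null-digest case:

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
  file=$(mktemp -t spdk.key-null.XXX)    # e.g. /tmp/spdk.key-null.tLg as in the trace
  # (the python - step writes the DHHC-1-wrapped form of $key into $file)
  chmod 0600 "$file"                     # DH-HMAC-CHAP secrets must not be world-readable
  echo "$file"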
00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.tLg 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.tLg 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.tLg 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=9b18a4b47f8d8838a777dc6d4aadd46e85dd5c5e21855c21eb7ddbce56d041b2 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.1Q3 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 9b18a4b47f8d8838a777dc6d4aadd46e85dd5c5e21855c21eb7ddbce56d041b2 3 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 9b18a4b47f8d8838a777dc6d4aadd46e85dd5c5e21855c21eb7ddbce56d041b2 3 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=9b18a4b47f8d8838a777dc6d4aadd46e85dd5c5e21855c21eb7ddbce56d041b2 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:20:38.749 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.1Q3 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.1Q3 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.1Q3 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=e7a5e5ebeaddf27d3975b3265127b383 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.xik 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key e7a5e5ebeaddf27d3975b3265127b383 1 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 e7a5e5ebeaddf27d3975b3265127b383 1 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=e7a5e5ebeaddf27d3975b3265127b383 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.xik 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.xik 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.xik 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=6bf517e362cc7239de765d9b1be3244c76fe294cb456cd84 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.9Hu 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 6bf517e362cc7239de765d9b1be3244c76fe294cb456cd84 2 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 6bf517e362cc7239de765d9b1be3244c76fe294cb456cd84 2 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:39.012 22:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=6bf517e362cc7239de765d9b1be3244c76fe294cb456cd84 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.9Hu 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.9Hu 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.9Hu 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bb8c3b09bad56f09514f5c70995cbda8f40a7f0201e1bdf7 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.oPo 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key bb8c3b09bad56f09514f5c70995cbda8f40a7f0201e1bdf7 2 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 bb8c3b09bad56f09514f5c70995cbda8f40a7f0201e1bdf7 2 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bb8c3b09bad56f09514f5c70995cbda8f40a7f0201e1bdf7 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.oPo 00:20:39.012 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.oPo 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.oPo 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=2150c4b5be27531f4c848e1c2276b5da 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Dq1 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 2150c4b5be27531f4c848e1c2276b5da 1 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 2150c4b5be27531f4c848e1c2276b5da 1 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=2150c4b5be27531f4c848e1c2276b5da 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:20:39.013 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Dq1 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Dq1 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Dq1 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=e44af5abb9fa8ecea7d652d7d44960086785c7471f8db6a9eb23ce58f9686098 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.Qm6 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key e44af5abb9fa8ecea7d652d7d44960086785c7471f8db6a9eb23ce58f9686098 3 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 e44af5abb9fa8ecea7d652d7d44960086785c7471f8db6a9eb23ce58f9686098 3 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=e44af5abb9fa8ecea7d652d7d44960086785c7471f8db6a9eb23ce58f9686098 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.Qm6 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.Qm6 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Qm6 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 682673 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 682673 ']' 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.274 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 683012 /var/tmp/host.sock 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 683012 ']' 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:39.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
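[Editor's note] With both apps listening, the loop that follows registers every generated key/ckey pair twice: once on the target over the default RPC socket, and once, through the hostrpc wrapper, on the host-side spdk_tgt listening at /var/tmp/host.sock. The same keyring name on both ends is what lets the auth handshake reference e.g. key0/ckey0 symmetrically. One iteration, spelled out with the file names from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc keyring_file_add_key key0 /tmp/spdk.key-null.tLg                          # target side
  $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tLg    # host side
  $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3                       # controller key
  $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3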
00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.536 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tLg 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.tLg 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tLg 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.1Q3 ]] 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3 00:20:39.798 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3 00:20:40.060 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:40.060 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xik 00:20:40.060 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.060 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.060 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.060 22:49:06 
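Every key file is then registered twice with keyring_file_add_key: once against the target application's default RPC socket (/var/tmp/spdk.sock) and once against the host application listening on /var/tmp/host.sock, so both ends of the DH-HMAC-CHAP handshake resolve the same key names. Distilled from the rpc.py calls traced above (paths as in this run):

# Register key0 and its controller key ckey0 on both RPC servers.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.tLg                         # target side
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tLg   # host side
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3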
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.xik 00:20:40.060 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.xik 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.9Hu ]] 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9Hu 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9Hu 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9Hu 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oPo 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oPo 00:20:40.322 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oPo 00:20:40.585 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Dq1 ]] 00:20:40.585 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dq1 00:20:40.585 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.585 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.585 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.585 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dq1 00:20:40.585 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dq1 00:20:40.847 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:40.848 22:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Qm6 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Qm6 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Qm6 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:40.848 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.109 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.109 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.109 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.109 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.110 
22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.371 00:20:41.371 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.371 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.371 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.634 { 00:20:41.634 "cntlid": 1, 00:20:41.634 "qid": 0, 00:20:41.634 "state": "enabled", 00:20:41.634 "thread": "nvmf_tgt_poll_group_000", 00:20:41.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:41.634 "listen_address": { 00:20:41.634 "trtype": "TCP", 00:20:41.634 "adrfam": "IPv4", 00:20:41.634 "traddr": "10.0.0.2", 00:20:41.634 "trsvcid": "4420" 00:20:41.634 }, 00:20:41.634 "peer_address": { 00:20:41.634 "trtype": "TCP", 00:20:41.634 "adrfam": "IPv4", 00:20:41.634 "traddr": "10.0.0.1", 00:20:41.634 "trsvcid": "38182" 00:20:41.634 }, 00:20:41.634 "auth": { 00:20:41.634 "state": "completed", 00:20:41.634 "digest": "sha256", 00:20:41.634 "dhgroup": "null" 00:20:41.634 } 00:20:41.634 } 00:20:41.634 ]' 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.634 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.896 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:20:41.896 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:20:42.468 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.468 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.468 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.468 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.468 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.468 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.468 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:42.468 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.729 22:49:09 
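The kernel-initiator path exercises the same credentials through nvme-cli, passing the DHHC-1 strings directly instead of keyring names: --dhchap-secret carries the host key and --dhchap-ctrl-secret the controller key for bidirectional authentication. The connect traced above reduces to the following (secrets abbreviated here; $hostnqn/$hostid stand for the uuid NQN and host ID from this run):

# Connect with bidirectional DH-HMAC-CHAP, then tear the session down.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:MGYy...UxsI1w==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:OWIx...DW8=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0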
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.729 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.991 00:20:42.991 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.991 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.991 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.252 { 00:20:43.252 "cntlid": 3, 00:20:43.252 "qid": 0, 00:20:43.252 "state": "enabled", 00:20:43.252 "thread": "nvmf_tgt_poll_group_000", 00:20:43.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:43.252 "listen_address": { 00:20:43.252 "trtype": "TCP", 00:20:43.252 "adrfam": "IPv4", 00:20:43.252 "traddr": "10.0.0.2", 00:20:43.252 "trsvcid": "4420" 00:20:43.252 }, 00:20:43.252 "peer_address": { 00:20:43.252 "trtype": "TCP", 00:20:43.252 "adrfam": "IPv4", 00:20:43.252 "traddr": "10.0.0.1", 00:20:43.252 "trsvcid": "48782" 00:20:43.252 }, 00:20:43.252 "auth": { 00:20:43.252 "state": "completed", 00:20:43.252 "digest": "sha256", 00:20:43.252 "dhgroup": "null" 00:20:43.252 } 00:20:43.252 } 00:20:43.252 ]' 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.252 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.513 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:20:43.513 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:20:44.084 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.084 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:44.084 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.084 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.084 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.084 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.084 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:44.084 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.345 22:49:11 
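Each connect_authenticate round reprograms the host bdev_nvme layer with the digest/dhgroup pair under test, grants the host NQN access to the subsystem with the keys for that slot, and attaches a controller using the same key names. Note the ${ckeys[$3]:+...} expansion in the trace: it emits --dhchap-ctrlr-key only when a controller key exists for the slot, which is why key3 (whose ckey is empty) is added without one. A condensed sketch of one round, with $rpc, $hostnqn, and the ckeys array assumed from the surrounding script:

# One round: digest=sha256, dhgroup=null, key slot 2.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null
ckey=(${ckeys[2]:+--dhchap-ctrlr-key "ckey2"})   # empty array when no ckey2 exists
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 "${ckey[@]}"
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key2 "${ckey[@]}"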
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.345 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.605 00:20:44.605 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.605 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.605 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.866 { 00:20:44.866 "cntlid": 5, 00:20:44.866 "qid": 0, 00:20:44.866 "state": "enabled", 00:20:44.866 "thread": "nvmf_tgt_poll_group_000", 00:20:44.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:44.866 "listen_address": { 00:20:44.866 "trtype": "TCP", 00:20:44.866 "adrfam": "IPv4", 00:20:44.866 "traddr": "10.0.0.2", 00:20:44.866 "trsvcid": "4420" 00:20:44.866 }, 00:20:44.866 "peer_address": { 00:20:44.866 "trtype": "TCP", 00:20:44.866 "adrfam": "IPv4", 00:20:44.866 "traddr": "10.0.0.1", 00:20:44.866 "trsvcid": "48820" 00:20:44.866 }, 00:20:44.866 "auth": { 00:20:44.866 "state": "completed", 00:20:44.866 "digest": "sha256", 00:20:44.866 "dhgroup": "null" 00:20:44.866 } 00:20:44.866 } 00:20:44.866 ]' 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.866 22:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.866 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.131 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:20:45.131 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:20:45.707 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.707 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.707 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.707 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.707 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.707 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.707 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:45.707 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.968 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.969 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.969 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.969 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.969 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.229 00:20:46.229 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.229 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.229 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.491 { 00:20:46.491 "cntlid": 7, 00:20:46.491 "qid": 0, 00:20:46.491 "state": "enabled", 00:20:46.491 "thread": "nvmf_tgt_poll_group_000", 00:20:46.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:46.491 "listen_address": { 00:20:46.491 "trtype": "TCP", 00:20:46.491 "adrfam": "IPv4", 00:20:46.491 "traddr": "10.0.0.2", 00:20:46.491 "trsvcid": "4420" 00:20:46.491 }, 00:20:46.491 "peer_address": { 00:20:46.491 "trtype": "TCP", 00:20:46.491 "adrfam": "IPv4", 00:20:46.491 "traddr": "10.0.0.1", 00:20:46.491 "trsvcid": "48848" 00:20:46.491 }, 00:20:46.491 "auth": { 00:20:46.491 "state": "completed", 00:20:46.491 "digest": "sha256", 00:20:46.491 "dhgroup": "null" 00:20:46.491 } 00:20:46.491 } 00:20:46.491 ]' 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.491 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.751 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:20:46.751 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.322 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.583 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.844 00:20:47.844 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.844 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.844 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.844 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.104 { 00:20:48.104 "cntlid": 9, 00:20:48.104 "qid": 0, 00:20:48.104 "state": "enabled", 00:20:48.104 "thread": "nvmf_tgt_poll_group_000", 00:20:48.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:48.104 "listen_address": { 00:20:48.104 "trtype": "TCP", 00:20:48.104 "adrfam": "IPv4", 00:20:48.104 "traddr": "10.0.0.2", 00:20:48.104 "trsvcid": "4420" 00:20:48.104 }, 00:20:48.104 "peer_address": { 00:20:48.104 "trtype": "TCP", 00:20:48.104 "adrfam": "IPv4", 00:20:48.104 "traddr": "10.0.0.1", 00:20:48.104 "trsvcid": "48868" 00:20:48.104 }, 00:20:48.104 "auth": { 00:20:48.104 "state": "completed", 00:20:48.104 "digest": "sha256", 00:20:48.104 "dhgroup": "ffdhe2048" 00:20:48.104 } 00:20:48.104 } 00:20:48.104 ]' 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:48.104 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.104 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.104 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.104 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.365 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:20:48.365 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:20:48.935 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.935 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:48.935 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.935 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.935 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.935 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.935 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:48.935 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.195 22:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.195 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.455 00:20:49.455 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.455 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.455 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.716 { 00:20:49.716 "cntlid": 11, 00:20:49.716 "qid": 0, 00:20:49.716 "state": "enabled", 00:20:49.716 "thread": "nvmf_tgt_poll_group_000", 00:20:49.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:49.716 "listen_address": { 00:20:49.716 "trtype": "TCP", 00:20:49.716 "adrfam": "IPv4", 00:20:49.716 "traddr": "10.0.0.2", 00:20:49.716 "trsvcid": "4420" 00:20:49.716 }, 00:20:49.716 "peer_address": { 00:20:49.716 "trtype": "TCP", 00:20:49.716 "adrfam": "IPv4", 00:20:49.716 "traddr": "10.0.0.1", 00:20:49.716 "trsvcid": "48892" 00:20:49.716 }, 00:20:49.716 "auth": { 00:20:49.716 "state": "completed", 00:20:49.716 "digest": "sha256", 00:20:49.716 "dhgroup": "ffdhe2048" 00:20:49.716 } 00:20:49.716 } 00:20:49.716 ]' 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.716 22:49:16 
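After each attach, the test reads the qpair list back from the target and asserts with jq that the negotiated parameters match what was configured: .auth.digest and .auth.dhgroup must echo the options set on the host side, and .auth.state must be "completed", i.e. the DH-HMAC-CHAP exchange actually ran to completion rather than being skipped. The checks reduce to:

# Verify the authenticated qpair reports the expected auth parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]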
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.716 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.976 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:20:49.976 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:20:50.546 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.546 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:50.546 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.546 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.546 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.546 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.546 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:50.546 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:50.806 22:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.806 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.067 00:20:51.067 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.067 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.068 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.328 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.328 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.328 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.328 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.328 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.328 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.328 { 00:20:51.328 "cntlid": 13, 00:20:51.328 "qid": 0, 00:20:51.328 "state": "enabled", 00:20:51.328 "thread": "nvmf_tgt_poll_group_000", 00:20:51.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:51.328 "listen_address": { 00:20:51.328 "trtype": "TCP", 00:20:51.328 "adrfam": "IPv4", 00:20:51.328 "traddr": "10.0.0.2", 00:20:51.328 "trsvcid": "4420" 00:20:51.328 }, 00:20:51.328 "peer_address": { 00:20:51.328 "trtype": "TCP", 00:20:51.328 "adrfam": "IPv4", 00:20:51.328 "traddr": "10.0.0.1", 00:20:51.328 "trsvcid": "48912" 00:20:51.328 }, 00:20:51.328 "auth": { 00:20:51.328 "state": "completed", 00:20:51.328 "digest": 
"sha256", 00:20:51.328 "dhgroup": "ffdhe2048" 00:20:51.329 } 00:20:51.329 } 00:20:51.329 ]' 00:20:51.329 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.329 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.329 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.329 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.329 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.329 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.329 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.329 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.590 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:20:51.590 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:20:52.161 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.161 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:52.161 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.161 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.161 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.161 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.161 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.161 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.421 22:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.421 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.422 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.422 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.682 00:20:52.682 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.682 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.682 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.682 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.682 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.682 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.682 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.943 { 00:20:52.943 "cntlid": 15, 00:20:52.943 "qid": 0, 00:20:52.943 "state": "enabled", 00:20:52.943 "thread": "nvmf_tgt_poll_group_000", 00:20:52.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:52.943 "listen_address": { 00:20:52.943 "trtype": "TCP", 00:20:52.943 "adrfam": "IPv4", 00:20:52.943 "traddr": "10.0.0.2", 00:20:52.943 "trsvcid": "4420" 00:20:52.943 }, 00:20:52.943 "peer_address": { 00:20:52.943 "trtype": "TCP", 00:20:52.943 "adrfam": "IPv4", 00:20:52.943 "traddr": "10.0.0.1", 00:20:52.943 
"trsvcid": "52038" 00:20:52.943 }, 00:20:52.943 "auth": { 00:20:52.943 "state": "completed", 00:20:52.943 "digest": "sha256", 00:20:52.943 "dhgroup": "ffdhe2048" 00:20:52.943 } 00:20:52.943 } 00:20:52.943 ]' 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.943 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.204 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:20:53.204 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.774 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:54.035 22:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.035 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.295 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.295 { 00:20:54.295 "cntlid": 17, 00:20:54.295 "qid": 0, 00:20:54.295 "state": "enabled", 00:20:54.295 "thread": "nvmf_tgt_poll_group_000", 00:20:54.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:54.295 "listen_address": { 00:20:54.295 "trtype": "TCP", 00:20:54.295 "adrfam": "IPv4", 
00:20:54.295 "traddr": "10.0.0.2", 00:20:54.295 "trsvcid": "4420" 00:20:54.295 }, 00:20:54.295 "peer_address": { 00:20:54.295 "trtype": "TCP", 00:20:54.295 "adrfam": "IPv4", 00:20:54.295 "traddr": "10.0.0.1", 00:20:54.295 "trsvcid": "52058" 00:20:54.295 }, 00:20:54.295 "auth": { 00:20:54.295 "state": "completed", 00:20:54.295 "digest": "sha256", 00:20:54.295 "dhgroup": "ffdhe3072" 00:20:54.295 } 00:20:54.295 } 00:20:54.295 ]' 00:20:54.295 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.555 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.555 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.555 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.555 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.555 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.555 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.555 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.815 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:20:54.815 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:20:55.386 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.386 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.386 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.386 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.386 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.386 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.386 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.386 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.647 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:55.647 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.647 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:55.647 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:55.647 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.647 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.648 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.648 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.648 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.648 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.648 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.648 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.648 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.910 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.910 { 
00:20:55.910 "cntlid": 19, 00:20:55.910 "qid": 0, 00:20:55.910 "state": "enabled", 00:20:55.910 "thread": "nvmf_tgt_poll_group_000", 00:20:55.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:55.910 "listen_address": { 00:20:55.910 "trtype": "TCP", 00:20:55.910 "adrfam": "IPv4", 00:20:55.910 "traddr": "10.0.0.2", 00:20:55.910 "trsvcid": "4420" 00:20:55.910 }, 00:20:55.910 "peer_address": { 00:20:55.910 "trtype": "TCP", 00:20:55.910 "adrfam": "IPv4", 00:20:55.910 "traddr": "10.0.0.1", 00:20:55.910 "trsvcid": "52084" 00:20:55.910 }, 00:20:55.910 "auth": { 00:20:55.910 "state": "completed", 00:20:55.910 "digest": "sha256", 00:20:55.910 "dhgroup": "ffdhe3072" 00:20:55.910 } 00:20:55.910 } 00:20:55.910 ]' 00:20:55.910 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.170 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.170 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.170 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.170 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.170 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.170 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.170 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.430 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:20:56.430 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:20:57.001 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.001 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:57.001 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.001 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.001 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.001 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.001 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:57.001 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:57.261 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:57.261 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.261 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.261 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:57.261 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.262 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.262 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.262 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.262 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.262 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.262 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.262 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.262 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.522 00:20:57.522 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.522 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.522 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.783 22:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.783 { 00:20:57.783 "cntlid": 21, 00:20:57.783 "qid": 0, 00:20:57.783 "state": "enabled", 00:20:57.783 "thread": "nvmf_tgt_poll_group_000", 00:20:57.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:57.783 "listen_address": { 00:20:57.783 "trtype": "TCP", 00:20:57.783 "adrfam": "IPv4", 00:20:57.783 "traddr": "10.0.0.2", 00:20:57.783 "trsvcid": "4420" 00:20:57.783 }, 00:20:57.783 "peer_address": { 00:20:57.783 "trtype": "TCP", 00:20:57.783 "adrfam": "IPv4", 00:20:57.783 "traddr": "10.0.0.1", 00:20:57.783 "trsvcid": "52124" 00:20:57.783 }, 00:20:57.783 "auth": { 00:20:57.783 "state": "completed", 00:20:57.783 "digest": "sha256", 00:20:57.783 "dhgroup": "ffdhe3072" 00:20:57.783 } 00:20:57.783 } 00:20:57.783 ]' 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.783 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.043 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:20:58.043 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:20:58.708 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.708 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:58.708 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.708 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.708 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
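The nvme_connect/nvme disconnect records just above are the second half of each pass: after the SPDK host path has authenticated and detached, the same subsystem and keys are exercised from the Linux kernel initiator, with the DH-HMAC-CHAP secrets passed inline to nvme-cli. A sketch with this run's values — $hostnqn/$hostid stand for the uuid-based identity used throughout, and the DHHC-1 strings are the run's throwaway key2/ckey2 test secrets, abbreviated here (the full values appear in the records above):

# Kernel-initiator leg of the pass; -i 1 caps it at one I/O queue and
# -l 0 sets a zero controller-loss timeout so failures surface at once.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:02:YmI4[...]LVQ==:' \
    --dhchap-ctrl-secret 'DHHC-1:01:MjE1[...]vj9I:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0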
-- # [[ 0 == 0 ]] 00:20:58.708 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.708 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:58.708 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.969 22:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.969 00:20:59.230 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.230 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.231 22:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.231 { 00:20:59.231 "cntlid": 23, 00:20:59.231 "qid": 0, 00:20:59.231 "state": "enabled", 00:20:59.231 "thread": "nvmf_tgt_poll_group_000", 00:20:59.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:59.231 "listen_address": { 00:20:59.231 "trtype": "TCP", 00:20:59.231 "adrfam": "IPv4", 00:20:59.231 "traddr": "10.0.0.2", 00:20:59.231 "trsvcid": "4420" 00:20:59.231 }, 00:20:59.231 "peer_address": { 00:20:59.231 "trtype": "TCP", 00:20:59.231 "adrfam": "IPv4", 00:20:59.231 "traddr": "10.0.0.1", 00:20:59.231 "trsvcid": "52142" 00:20:59.231 }, 00:20:59.231 "auth": { 00:20:59.231 "state": "completed", 00:20:59.231 "digest": "sha256", 00:20:59.231 "dhgroup": "ffdhe3072" 00:20:59.231 } 00:20:59.231 } 00:20:59.231 ]' 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.231 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.492 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.492 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.492 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.492 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.492 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.492 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:20:59.492 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
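Note that the key3 passes, like the one completing here, add the host with --dhchap-key key3 alone and connect with only --dhchap-secret: with no controller key the authentication is unidirectional (the host proves its identity to the target but not the reverse), whereas the key0-key2 passes carry a ckey and are bidirectional. The secrets use the standard NVMe-oF representation DHHC-1:<t>:<base64 of key material plus CRC>:, where <t> names the hash the secret was transformed with (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); the four host keys in this run happen to cover all four tags. Recent nvme-cli builds can mint such secrets — the helper and flags below are an assumption to check against the installed version, not something this log shows:

# Hypothetical key generation; verify flags with the installed nvme-cli.
# --hmac=2 selects a SHA-384 transform, so the output is DHHC-1:02:...:
nvme gen-dhchap-key --hmac=2 --key-length=48 \
    --nqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396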
== 0 ]] 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.433 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.434 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.434 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.434 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.434 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.434 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.694 00:21:00.694 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.694 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.694 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.954 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.954 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.954 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.954 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.954 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.954 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.954 { 00:21:00.954 "cntlid": 25, 00:21:00.954 "qid": 0, 00:21:00.954 "state": "enabled", 00:21:00.954 "thread": "nvmf_tgt_poll_group_000", 00:21:00.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:00.954 "listen_address": { 00:21:00.954 "trtype": "TCP", 00:21:00.954 "adrfam": "IPv4", 00:21:00.954 "traddr": "10.0.0.2", 00:21:00.954 "trsvcid": "4420" 00:21:00.954 }, 00:21:00.954 "peer_address": { 00:21:00.954 "trtype": "TCP", 00:21:00.954 "adrfam": "IPv4", 00:21:00.954 "traddr": "10.0.0.1", 00:21:00.954 "trsvcid": "52156" 00:21:00.954 }, 00:21:00.954 "auth": { 00:21:00.954 "state": "completed", 00:21:00.954 "digest": "sha256", 00:21:00.954 "dhgroup": "ffdhe4096" 00:21:00.954 } 00:21:00.954 } 00:21:00.954 ]' 00:21:00.954 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.955 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.955 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.955 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.955 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.955 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.955 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.955 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.215 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:01.215 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:01.787 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.787 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
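The nvmf_subsystem_remove_host call that straddles this line break is the teardown ending every pass: deauthorizing the host restores the subsystem ACL to its initial state, so the next digest/dhgroup/key combination is tested against a freshly added host entry rather than a stale one. A side effect visible in this log is the controller ID: each authenticated attach gets a fresh controller, so the qpair dumps advance through cntlid 13, 15, 17 ... up to 31 across these passes. A minimal teardown sketch, with $hostnqn as before:

# End-of-pass teardown, mirroring the @82/@83 records: drop the kernel
# session, then remove the host's authorization from the subsystem.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"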
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:01.787 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.787 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.787 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.787 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.787 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:01.787 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.047 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.308 00:21:02.308 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.308 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.308 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.568 { 00:21:02.568 "cntlid": 27, 00:21:02.568 "qid": 0, 00:21:02.568 "state": "enabled", 00:21:02.568 "thread": "nvmf_tgt_poll_group_000", 00:21:02.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:02.568 "listen_address": { 00:21:02.568 "trtype": "TCP", 00:21:02.568 "adrfam": "IPv4", 00:21:02.568 "traddr": "10.0.0.2", 00:21:02.568 "trsvcid": "4420" 00:21:02.568 }, 00:21:02.568 "peer_address": { 00:21:02.568 "trtype": "TCP", 00:21:02.568 "adrfam": "IPv4", 00:21:02.568 "traddr": "10.0.0.1", 00:21:02.568 "trsvcid": "36268" 00:21:02.568 }, 00:21:02.568 "auth": { 00:21:02.568 "state": "completed", 00:21:02.568 "digest": "sha256", 00:21:02.568 "dhgroup": "ffdhe4096" 00:21:02.568 } 00:21:02.568 } 00:21:02.568 ]' 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.568 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.828 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:02.828 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:03.472 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:03.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.472 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:03.472 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.472 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.472 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.472 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.472 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:03.472 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.732 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.991 00:21:03.991 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
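The @73-@74 records beginning here are the verification half of the cycle: the host must see the attached controller under the name given with -b, and the target must report the negotiated auth parameters on the new qpair. Condensed into two checks — same RPCs as the trace, though the jq interpolation line is an editor's condensation, not part of auth.sh:

# Host side: the controller attached above must be visible as nvme0.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# Target side: one line per qpair with the negotiated parameters, e.g.
# "29 sha256/ffdhe4096 completed" for the pass in progress here.
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
    jq -r '.[] | "\(.cntlid) \(.auth.digest)/\(.auth.dhgroup) \(.auth.state)"'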
00:21:03.991 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.991 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.251 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.251 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.251 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.252 { 00:21:04.252 "cntlid": 29, 00:21:04.252 "qid": 0, 00:21:04.252 "state": "enabled", 00:21:04.252 "thread": "nvmf_tgt_poll_group_000", 00:21:04.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:04.252 "listen_address": { 00:21:04.252 "trtype": "TCP", 00:21:04.252 "adrfam": "IPv4", 00:21:04.252 "traddr": "10.0.0.2", 00:21:04.252 "trsvcid": "4420" 00:21:04.252 }, 00:21:04.252 "peer_address": { 00:21:04.252 "trtype": "TCP", 00:21:04.252 "adrfam": "IPv4", 00:21:04.252 "traddr": "10.0.0.1", 00:21:04.252 "trsvcid": "36292" 00:21:04.252 }, 00:21:04.252 "auth": { 00:21:04.252 "state": "completed", 00:21:04.252 "digest": "sha256", 00:21:04.252 "dhgroup": "ffdhe4096" 00:21:04.252 } 00:21:04.252 } 00:21:04.252 ]' 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.252 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.512 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:04.512 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: 
--dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:05.083 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.083 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.083 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.083 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.083 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.083 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.083 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:05.083 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:05.343 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:05.343 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.343 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:05.343 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.343 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:05.343 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.343 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:05.343 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.344 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.344 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.344 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:05.344 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.344 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.603 00:21:05.603 22:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.603 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.603 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.863 { 00:21:05.863 "cntlid": 31, 00:21:05.863 "qid": 0, 00:21:05.863 "state": "enabled", 00:21:05.863 "thread": "nvmf_tgt_poll_group_000", 00:21:05.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:05.863 "listen_address": { 00:21:05.863 "trtype": "TCP", 00:21:05.863 "adrfam": "IPv4", 00:21:05.863 "traddr": "10.0.0.2", 00:21:05.863 "trsvcid": "4420" 00:21:05.863 }, 00:21:05.863 "peer_address": { 00:21:05.863 "trtype": "TCP", 00:21:05.863 "adrfam": "IPv4", 00:21:05.863 "traddr": "10.0.0.1", 00:21:05.863 "trsvcid": "36330" 00:21:05.863 }, 00:21:05.863 "auth": { 00:21:05.863 "state": "completed", 00:21:05.863 "digest": "sha256", 00:21:05.863 "dhgroup": "ffdhe4096" 00:21:05.863 } 00:21:05.863 } 00:21:05.863 ]' 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.863 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.864 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.864 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.864 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.864 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.125 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:06.125 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:06.695 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.695 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:06.695 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.695 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.695 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.695 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.696 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.696 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:06.696 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.955 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.215 00:21:07.215 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.215 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.215 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.475 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.476 { 00:21:07.476 "cntlid": 33, 00:21:07.476 "qid": 0, 00:21:07.476 "state": "enabled", 00:21:07.476 "thread": "nvmf_tgt_poll_group_000", 00:21:07.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:07.476 "listen_address": { 00:21:07.476 "trtype": "TCP", 00:21:07.476 "adrfam": "IPv4", 00:21:07.476 "traddr": "10.0.0.2", 00:21:07.476 "trsvcid": "4420" 00:21:07.476 }, 00:21:07.476 "peer_address": { 00:21:07.476 "trtype": "TCP", 00:21:07.476 "adrfam": "IPv4", 00:21:07.476 "traddr": "10.0.0.1", 00:21:07.476 "trsvcid": "36344" 00:21:07.476 }, 00:21:07.476 "auth": { 00:21:07.476 "state": "completed", 00:21:07.476 "digest": "sha256", 00:21:07.476 "dhgroup": "ffdhe6144" 00:21:07.476 } 00:21:07.476 } 00:21:07.476 ]' 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.476 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.737 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret 
DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:07.737 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:08.307 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.567 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.138 00:21:09.138 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.138 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.138 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.138 { 00:21:09.138 "cntlid": 35, 00:21:09.138 "qid": 0, 00:21:09.138 "state": "enabled", 00:21:09.138 "thread": "nvmf_tgt_poll_group_000", 00:21:09.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:09.138 "listen_address": { 00:21:09.138 "trtype": "TCP", 00:21:09.138 "adrfam": "IPv4", 00:21:09.138 "traddr": "10.0.0.2", 00:21:09.138 "trsvcid": "4420" 00:21:09.138 }, 00:21:09.138 "peer_address": { 00:21:09.138 "trtype": "TCP", 00:21:09.138 "adrfam": "IPv4", 00:21:09.138 "traddr": "10.0.0.1", 00:21:09.138 "trsvcid": "36372" 00:21:09.138 }, 00:21:09.138 "auth": { 00:21:09.138 "state": "completed", 00:21:09.138 "digest": "sha256", 00:21:09.138 "dhgroup": "ffdhe6144" 00:21:09.138 } 00:21:09.138 } 00:21:09.138 ]' 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.138 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.399 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.399 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.399 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.399 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.399 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.399 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:09.399 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.340 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.599 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.858 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.858 { 00:21:10.858 "cntlid": 37, 00:21:10.858 "qid": 0, 00:21:10.858 "state": "enabled", 00:21:10.858 "thread": "nvmf_tgt_poll_group_000", 00:21:10.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:10.858 "listen_address": { 00:21:10.858 "trtype": "TCP", 00:21:10.858 "adrfam": "IPv4", 00:21:10.858 "traddr": "10.0.0.2", 00:21:10.858 "trsvcid": "4420" 00:21:10.858 }, 00:21:10.858 "peer_address": { 00:21:10.859 "trtype": "TCP", 00:21:10.859 "adrfam": "IPv4", 00:21:10.859 "traddr": "10.0.0.1", 00:21:10.859 "trsvcid": "36408" 00:21:10.859 }, 00:21:10.859 "auth": { 00:21:10.859 "state": "completed", 00:21:10.859 "digest": "sha256", 00:21:10.859 "dhgroup": "ffdhe6144" 00:21:10.859 } 00:21:10.859 } 00:21:10.859 ]' 00:21:10.859 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.119 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.119 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.119 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.119 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.119 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.119 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:11.119 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.378 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:11.378 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:11.947 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.947 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.947 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.947 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.947 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.947 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.947 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:11.947 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.209 22:49:39 
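
[Editor's aside] The secrets themselves are worth a second look: each one has the form DHHC-1:<id>:<base64 blob>:, where the two-digit identifier records how the underlying key material was transformed (per the NVMe in-band authentication spec: 00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the keys in this run carry different prefixes and lengths. Recent nvme-cli releases can mint such secrets; a hedged example, since exact flag spellings may differ between nvme-cli versions:

  nvme gen-dhchap-key --key-length=48 --hmac=2 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
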
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.209 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.470 00:21:12.470 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.470 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.470 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.731 { 00:21:12.731 "cntlid": 39, 00:21:12.731 "qid": 0, 00:21:12.731 "state": "enabled", 00:21:12.731 "thread": "nvmf_tgt_poll_group_000", 00:21:12.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:12.731 "listen_address": { 00:21:12.731 "trtype": "TCP", 00:21:12.731 "adrfam": "IPv4", 00:21:12.731 "traddr": "10.0.0.2", 00:21:12.731 "trsvcid": "4420" 00:21:12.731 }, 00:21:12.731 "peer_address": { 00:21:12.731 "trtype": "TCP", 00:21:12.731 "adrfam": "IPv4", 00:21:12.731 "traddr": "10.0.0.1", 00:21:12.731 "trsvcid": "55786" 00:21:12.731 }, 00:21:12.731 "auth": { 00:21:12.731 "state": "completed", 00:21:12.731 "digest": "sha256", 00:21:12.731 "dhgroup": "ffdhe6144" 00:21:12.731 } 00:21:12.731 } 00:21:12.731 ]' 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.731 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.992 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:12.992 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:13.562 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
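
[Editor's aside] The target/auth.sh@118-@121 markers expose the driver loops: every DH-HMAC-CHAP digest is crossed with every DH group and every key index, and bdev_nvme_set_options is re-applied before each attempt so the initiator can only negotiate the combination under test. Paraphrased from the trace (array and function names as printed by xtrace; hostrpc is the script's wrapper around rpc.py -s /var/tmp/host.sock):

  for digest in "${digests[@]}"; do          # auth.sh@118
      for dhgroup in "${dhgroups[@]}"; do    # auth.sh@119
          for keyid in "${!keys[@]}"; do     # auth.sh@120
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@123
          done
      done
  done

The jump from ffdhe6144 to ffdhe8192 just above is the middle loop advancing; the digest loop advances to sha384 further down.
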
00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.823 22:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.393 00:21:14.393 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.393 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.393 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.654 { 00:21:14.654 "cntlid": 41, 00:21:14.654 "qid": 0, 00:21:14.654 "state": "enabled", 00:21:14.654 "thread": "nvmf_tgt_poll_group_000", 00:21:14.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:14.654 "listen_address": { 00:21:14.654 "trtype": "TCP", 00:21:14.654 "adrfam": "IPv4", 00:21:14.654 "traddr": "10.0.0.2", 00:21:14.654 "trsvcid": "4420" 00:21:14.654 }, 00:21:14.654 "peer_address": { 00:21:14.654 "trtype": "TCP", 00:21:14.654 "adrfam": "IPv4", 00:21:14.654 "traddr": "10.0.0.1", 00:21:14.654 "trsvcid": "55818" 00:21:14.654 }, 00:21:14.654 "auth": { 00:21:14.654 "state": "completed", 00:21:14.654 "digest": "sha256", 00:21:14.654 "dhgroup": "ffdhe8192" 00:21:14.654 } 00:21:14.654 } 00:21:14.654 ]' 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.654 22:49:41 
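
[Editor's aside] The kernel-initiator leg of each iteration hands the same secrets to nvme-cli on the command line and disconnects again before the host entry is removed. Condensed from the commands in this run (-i 1 caps the connection at one I/O queue and -l 0 zeroes the controller-loss timeout, per the usual nvme-cli connect flags; $HOSTNQN and $HOSTID stand in for the literal uuid values above, and the secret bodies are elided):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
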
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.654 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.913 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:14.914 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:15.483 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.484 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.484 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.484 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.484 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.484 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.484 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:15.484 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.743 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.313 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.313 { 00:21:16.313 "cntlid": 43, 00:21:16.313 "qid": 0, 00:21:16.313 "state": "enabled", 00:21:16.313 "thread": "nvmf_tgt_poll_group_000", 00:21:16.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:16.313 "listen_address": { 00:21:16.313 "trtype": "TCP", 00:21:16.313 "adrfam": "IPv4", 00:21:16.313 "traddr": "10.0.0.2", 00:21:16.313 "trsvcid": "4420" 00:21:16.313 }, 00:21:16.313 "peer_address": { 00:21:16.313 "trtype": "TCP", 00:21:16.313 "adrfam": "IPv4", 00:21:16.313 "traddr": "10.0.0.1", 00:21:16.313 "trsvcid": "55850" 00:21:16.313 }, 00:21:16.313 "auth": { 00:21:16.313 "state": "completed", 00:21:16.313 "digest": "sha256", 00:21:16.313 "dhgroup": "ffdhe8192" 00:21:16.313 } 00:21:16.313 } 00:21:16.313 ]' 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.313 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:16.314 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.574 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.574 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.574 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.574 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.574 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.574 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:16.574 22:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.514 22:49:44 
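
[Editor's aside] One detail that explains the asymmetry between key indexes: the repeated ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line is a bash ${var:+word} expansion, so the array holds the --dhchap-ctrlr-key pair only when a controller secret is defined and non-empty for that index. That is why the key0-key2 iterations register and attach with both --dhchap-key keyN and --dhchap-ctrlr-key ckeyN (bidirectional authentication) while the key3 iterations pass only --dhchap-key key3 and the target never authenticates itself back. A standalone illustration of the expansion (values hypothetical):

  ckeys=([0]="secret0" [3]="")
  ckey=(${ckeys[0]:+--dhchap-ctrlr-key ckey0})   # two elements: flag + value
  ckey=(${ckeys[3]:+--dhchap-ctrlr-key ckey3})   # empty array: :+ skips unset/empty
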
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.514 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.515 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.515 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.085 00:21:18.085 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.085 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.085 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.085 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.085 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.085 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.085 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.085 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.085 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.085 { 00:21:18.085 "cntlid": 45, 00:21:18.085 "qid": 0, 00:21:18.085 "state": "enabled", 00:21:18.085 "thread": "nvmf_tgt_poll_group_000", 00:21:18.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:18.085 "listen_address": { 00:21:18.085 "trtype": "TCP", 00:21:18.085 "adrfam": "IPv4", 00:21:18.085 "traddr": "10.0.0.2", 00:21:18.085 "trsvcid": "4420" 00:21:18.085 }, 00:21:18.085 "peer_address": { 00:21:18.085 "trtype": "TCP", 00:21:18.085 "adrfam": "IPv4", 00:21:18.085 "traddr": "10.0.0.1", 00:21:18.085 "trsvcid": "55876" 00:21:18.085 }, 00:21:18.085 "auth": { 00:21:18.085 "state": "completed", 00:21:18.085 "digest": "sha256", 00:21:18.085 "dhgroup": "ffdhe8192" 00:21:18.085 } 00:21:18.085 } 00:21:18.085 ]' 00:21:18.086 
22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.346 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.346 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.346 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.346 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.346 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.346 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.346 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.606 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:18.606 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:19.176 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.177 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.177 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.177 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.177 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.177 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.177 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:19.177 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:19.436 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:19.436 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.436 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.436 22:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:19.436 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.436 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.436 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:19.437 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.437 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.437 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.437 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.437 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.437 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.007 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.007 { 00:21:20.007 "cntlid": 47, 00:21:20.007 "qid": 0, 00:21:20.007 "state": "enabled", 00:21:20.007 "thread": "nvmf_tgt_poll_group_000", 00:21:20.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:20.007 "listen_address": { 00:21:20.007 "trtype": "TCP", 00:21:20.007 "adrfam": "IPv4", 00:21:20.007 "traddr": "10.0.0.2", 00:21:20.007 "trsvcid": "4420" 00:21:20.007 }, 00:21:20.007 "peer_address": { 00:21:20.007 "trtype": "TCP", 00:21:20.007 "adrfam": "IPv4", 00:21:20.007 "traddr": "10.0.0.1", 00:21:20.007 "trsvcid": "55898" 00:21:20.007 }, 00:21:20.007 "auth": { 00:21:20.007 "state": "completed", 00:21:20.007 
"digest": "sha256", 00:21:20.007 "dhgroup": "ffdhe8192" 00:21:20.007 } 00:21:20.007 } 00:21:20.007 ]' 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:20.007 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.007 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.007 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.268 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.268 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.268 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.268 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:20.268 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:20.839 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:21.101 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:21.101 22:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.101 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.362 00:21:21.362 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.362 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.362 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.624 { 00:21:21.624 "cntlid": 49, 00:21:21.624 "qid": 0, 00:21:21.624 "state": "enabled", 00:21:21.624 "thread": "nvmf_tgt_poll_group_000", 00:21:21.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:21.624 "listen_address": { 00:21:21.624 "trtype": "TCP", 00:21:21.624 "adrfam": "IPv4", 
00:21:21.624 "traddr": "10.0.0.2", 00:21:21.624 "trsvcid": "4420" 00:21:21.624 }, 00:21:21.624 "peer_address": { 00:21:21.624 "trtype": "TCP", 00:21:21.624 "adrfam": "IPv4", 00:21:21.624 "traddr": "10.0.0.1", 00:21:21.624 "trsvcid": "55908" 00:21:21.624 }, 00:21:21.624 "auth": { 00:21:21.624 "state": "completed", 00:21:21.624 "digest": "sha384", 00:21:21.624 "dhgroup": "null" 00:21:21.624 } 00:21:21.624 } 00:21:21.624 ]' 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.624 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.884 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:21.885 22:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:22.456 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.456 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:22.456 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.456 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.717 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.978 00:21:22.978 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.978 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.978 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.240 { 00:21:23.240 "cntlid": 51, 00:21:23.240 "qid": 0, 00:21:23.240 "state": "enabled", 
00:21:23.240 "thread": "nvmf_tgt_poll_group_000", 00:21:23.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:23.240 "listen_address": { 00:21:23.240 "trtype": "TCP", 00:21:23.240 "adrfam": "IPv4", 00:21:23.240 "traddr": "10.0.0.2", 00:21:23.240 "trsvcid": "4420" 00:21:23.240 }, 00:21:23.240 "peer_address": { 00:21:23.240 "trtype": "TCP", 00:21:23.240 "adrfam": "IPv4", 00:21:23.240 "traddr": "10.0.0.1", 00:21:23.240 "trsvcid": "44150" 00:21:23.240 }, 00:21:23.240 "auth": { 00:21:23.240 "state": "completed", 00:21:23.240 "digest": "sha384", 00:21:23.240 "dhgroup": "null" 00:21:23.240 } 00:21:23.240 } 00:21:23.240 ]' 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.240 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.501 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:23.501 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:24.072 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.072 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:24.072 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.072 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.333 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.595 00:21:24.595 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.595 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.595 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.857 22:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.857 { 00:21:24.857 "cntlid": 53, 00:21:24.857 "qid": 0, 00:21:24.857 "state": "enabled", 00:21:24.857 "thread": "nvmf_tgt_poll_group_000", 00:21:24.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:24.857 "listen_address": { 00:21:24.857 "trtype": "TCP", 00:21:24.857 "adrfam": "IPv4", 00:21:24.857 "traddr": "10.0.0.2", 00:21:24.857 "trsvcid": "4420" 00:21:24.857 }, 00:21:24.857 "peer_address": { 00:21:24.857 "trtype": "TCP", 00:21:24.857 "adrfam": "IPv4", 00:21:24.857 "traddr": "10.0.0.1", 00:21:24.857 "trsvcid": "44172" 00:21:24.857 }, 00:21:24.857 "auth": { 00:21:24.857 "state": "completed", 00:21:24.857 "digest": "sha384", 00:21:24.857 "dhgroup": "null" 00:21:24.857 } 00:21:24.857 } 00:21:24.857 ]' 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.857 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.118 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:25.118 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.061 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.062 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.323 00:21:26.323 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.323 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.323 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.584 { 00:21:26.584 "cntlid": 55, 00:21:26.584 "qid": 0, 00:21:26.584 "state": "enabled", 00:21:26.584 "thread": "nvmf_tgt_poll_group_000", 00:21:26.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:26.584 "listen_address": { 00:21:26.584 "trtype": "TCP", 00:21:26.584 "adrfam": "IPv4", 00:21:26.584 "traddr": "10.0.0.2", 00:21:26.584 "trsvcid": "4420" 00:21:26.584 }, 00:21:26.584 "peer_address": { 00:21:26.584 "trtype": "TCP", 00:21:26.584 "adrfam": "IPv4", 00:21:26.584 "traddr": "10.0.0.1", 00:21:26.584 "trsvcid": "44196" 00:21:26.584 }, 00:21:26.584 "auth": { 00:21:26.584 "state": "completed", 00:21:26.584 "digest": "sha384", 00:21:26.584 "dhgroup": "null" 00:21:26.584 } 00:21:26.584 } 00:21:26.584 ]' 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.584 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.845 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:26.845 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:27.416 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.416 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:27.416 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.416 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.416 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.416 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.416 22:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.416 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:27.416 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.678 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.939 00:21:27.939 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.939 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.939 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.200 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.200 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.200 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:28.200 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.200 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.200 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.200 { 00:21:28.200 "cntlid": 57, 00:21:28.200 "qid": 0, 00:21:28.200 "state": "enabled", 00:21:28.200 "thread": "nvmf_tgt_poll_group_000", 00:21:28.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:28.200 "listen_address": { 00:21:28.200 "trtype": "TCP", 00:21:28.200 "adrfam": "IPv4", 00:21:28.200 "traddr": "10.0.0.2", 00:21:28.200 "trsvcid": "4420" 00:21:28.200 }, 00:21:28.200 "peer_address": { 00:21:28.200 "trtype": "TCP", 00:21:28.200 "adrfam": "IPv4", 00:21:28.200 "traddr": "10.0.0.1", 00:21:28.200 "trsvcid": "44230" 00:21:28.200 }, 00:21:28.200 "auth": { 00:21:28.200 "state": "completed", 00:21:28.200 "digest": "sha384", 00:21:28.200 "dhgroup": "ffdhe2048" 00:21:28.200 } 00:21:28.200 } 00:21:28.200 ]' 00:21:28.200 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.200 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.200 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.200 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.200 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.201 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.201 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.201 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.462 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:28.462 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:29.034 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.034 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:29.034 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.034 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.034 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.034 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.034 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:29.034 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.295 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.555 00:21:29.555 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.555 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.555 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.555 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.555 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.555 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.555 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.819 { 00:21:29.819 "cntlid": 59, 00:21:29.819 "qid": 0, 00:21:29.819 "state": "enabled", 00:21:29.819 "thread": "nvmf_tgt_poll_group_000", 00:21:29.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:29.819 "listen_address": { 00:21:29.819 "trtype": "TCP", 00:21:29.819 "adrfam": "IPv4", 00:21:29.819 "traddr": "10.0.0.2", 00:21:29.819 "trsvcid": "4420" 00:21:29.819 }, 00:21:29.819 "peer_address": { 00:21:29.819 "trtype": "TCP", 00:21:29.819 "adrfam": "IPv4", 00:21:29.819 "traddr": "10.0.0.1", 00:21:29.819 "trsvcid": "44248" 00:21:29.819 }, 00:21:29.819 "auth": { 00:21:29.819 "state": "completed", 00:21:29.819 "digest": "sha384", 00:21:29.819 "dhgroup": "ffdhe2048" 00:21:29.819 } 00:21:29.819 } 00:21:29.819 ]' 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.819 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.081 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:30.081 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:30.653 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.653 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.653 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.653 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.653 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.653 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.653 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:30.653 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.914 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.176 00:21:31.176 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.176 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.176 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.437 { 00:21:31.437 "cntlid": 61, 00:21:31.437 "qid": 0, 00:21:31.437 "state": "enabled", 00:21:31.437 "thread": "nvmf_tgt_poll_group_000", 00:21:31.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:31.437 "listen_address": { 00:21:31.437 "trtype": "TCP", 00:21:31.437 "adrfam": "IPv4", 00:21:31.437 "traddr": "10.0.0.2", 00:21:31.437 "trsvcid": "4420" 00:21:31.437 }, 00:21:31.437 "peer_address": { 00:21:31.437 "trtype": "TCP", 00:21:31.437 "adrfam": "IPv4", 00:21:31.437 "traddr": "10.0.0.1", 00:21:31.437 "trsvcid": "44280" 00:21:31.437 }, 00:21:31.437 "auth": { 00:21:31.437 "state": "completed", 00:21:31.437 "digest": "sha384", 00:21:31.437 "dhgroup": "ffdhe2048" 00:21:31.437 } 00:21:31.437 } 00:21:31.437 ]' 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.437 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.699 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:31.699 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:32.269 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.269 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:32.269 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.269 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.269 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.269 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.269 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:32.269 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.530 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.791 00:21:32.791 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.791 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.791 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.052 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.052 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.052 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.052 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.052 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.052 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.052 { 00:21:33.052 "cntlid": 63, 00:21:33.052 "qid": 0, 00:21:33.052 "state": "enabled", 00:21:33.052 "thread": "nvmf_tgt_poll_group_000", 00:21:33.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:33.052 "listen_address": { 00:21:33.052 "trtype": "TCP", 00:21:33.052 "adrfam": "IPv4", 00:21:33.052 "traddr": "10.0.0.2", 00:21:33.052 "trsvcid": "4420" 00:21:33.052 }, 00:21:33.052 "peer_address": { 00:21:33.052 "trtype": "TCP", 00:21:33.052 "adrfam": "IPv4", 00:21:33.052 "traddr": "10.0.0.1", 00:21:33.052 "trsvcid": "42518" 00:21:33.052 }, 00:21:33.052 "auth": { 00:21:33.052 "state": "completed", 00:21:33.052 "digest": "sha384", 00:21:33.052 "dhgroup": "ffdhe2048" 00:21:33.052 } 00:21:33.053 } 00:21:33.053 ]' 00:21:33.053 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.053 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.053 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.053 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.053 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.053 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.053 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.053 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.314 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:33.314 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:33.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:33.886 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.147 22:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.407 
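Reading aid for the loop above: every digest/dhgroup/key round in this trace reduces to the same five RPC steps. Below is a condensed restatement using the sockets, NQNs, and flags that appear verbatim in the surrounding entries; it is not the literal target/auth.sh code, and the one assumption is spelling the target-side `rpc_cmd` calls as plain `rpc.py` against the target app's default socket (the trace's `rpc_cmd` wrapper hides that socket).

  tgt_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # target app; default socket assumed
  host_rpc="$tgt_rpc -s /var/tmp/host.sock"                                   # host-side app that owns the nvme0 controller
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  # 1. pin the host to one digest/dhgroup combination for this round
  $host_rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # 2. allow the host on the subsystem with this round's key (ckeyN added only when a controller key exists)
  $tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. attach a controller, which forces DH-HMAC-CHAP to run on the new qpair
  $host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 4. assert the controller came up, then 5. tear down before the next round
  $host_rpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  $host_rpc bdev_nvme_detach_controller nvme0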
00:21:34.407 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.407 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.407 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.668 { 00:21:34.668 "cntlid": 65, 00:21:34.668 "qid": 0, 00:21:34.668 "state": "enabled", 00:21:34.668 "thread": "nvmf_tgt_poll_group_000", 00:21:34.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:34.668 "listen_address": { 00:21:34.668 "trtype": "TCP", 00:21:34.668 "adrfam": "IPv4", 00:21:34.668 "traddr": "10.0.0.2", 00:21:34.668 "trsvcid": "4420" 00:21:34.668 }, 00:21:34.668 "peer_address": { 00:21:34.668 "trtype": "TCP", 00:21:34.668 "adrfam": "IPv4", 00:21:34.668 "traddr": "10.0.0.1", 00:21:34.668 "trsvcid": "42542" 00:21:34.668 }, 00:21:34.668 "auth": { 00:21:34.668 "state": "completed", 00:21:34.668 "digest": "sha384", 00:21:34.668 "dhgroup": "ffdhe3072" 00:21:34.668 } 00:21:34.668 } 00:21:34.668 ]' 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.668 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.929 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:34.929 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:35.499 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.500 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:35.500 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.500 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.500 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.500 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.500 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:35.500 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.761 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.023 00:21:36.023 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.023 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.023 22:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.284 { 00:21:36.284 "cntlid": 67, 00:21:36.284 "qid": 0, 00:21:36.284 "state": "enabled", 00:21:36.284 "thread": "nvmf_tgt_poll_group_000", 00:21:36.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:36.284 "listen_address": { 00:21:36.284 "trtype": "TCP", 00:21:36.284 "adrfam": "IPv4", 00:21:36.284 "traddr": "10.0.0.2", 00:21:36.284 "trsvcid": "4420" 00:21:36.284 }, 00:21:36.284 "peer_address": { 00:21:36.284 "trtype": "TCP", 00:21:36.284 "adrfam": "IPv4", 00:21:36.284 "traddr": "10.0.0.1", 00:21:36.284 "trsvcid": "42574" 00:21:36.284 }, 00:21:36.284 "auth": { 00:21:36.284 "state": "completed", 00:21:36.284 "digest": "sha384", 00:21:36.284 "dhgroup": "ffdhe3072" 00:21:36.284 } 00:21:36.284 } 00:21:36.284 ]' 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.284 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.285 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.285 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.285 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.285 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.545 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret 
DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:36.545 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:37.118 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.118 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.118 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.118 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.118 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.118 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.118 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:37.118 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.378 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.639 00:21:37.639 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.639 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.639 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.900 { 00:21:37.900 "cntlid": 69, 00:21:37.900 "qid": 0, 00:21:37.900 "state": "enabled", 00:21:37.900 "thread": "nvmf_tgt_poll_group_000", 00:21:37.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:37.900 "listen_address": { 00:21:37.900 "trtype": "TCP", 00:21:37.900 "adrfam": "IPv4", 00:21:37.900 "traddr": "10.0.0.2", 00:21:37.900 "trsvcid": "4420" 00:21:37.900 }, 00:21:37.900 "peer_address": { 00:21:37.900 "trtype": "TCP", 00:21:37.900 "adrfam": "IPv4", 00:21:37.900 "traddr": "10.0.0.1", 00:21:37.900 "trsvcid": "42598" 00:21:37.900 }, 00:21:37.900 "auth": { 00:21:37.900 "state": "completed", 00:21:37.900 "digest": "sha384", 00:21:37.900 "dhgroup": "ffdhe3072" 00:21:37.900 } 00:21:37.900 } 00:21:37.900 ]' 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.900 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:38.161 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:38.161 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:38.732 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.733 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:38.733 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.733 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.733 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.733 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.733 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:38.733 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
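Each round also exercises the kernel initiator, the `nvme_connect` / `nvme disconnect` pairs seen throughout this trace. A compact restatement of that leg, reusing the variables from the sketch above; the DHHC-1 secrets are abbreviated with "..." here purely for readability, since the full base64 strings appear verbatim in the neighboring entries:

  # in-band DH-HMAC-CHAP from the Linux host, with host and controller secrets passed inline
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -l 0 \
      -q "$hostnqn" --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-secret 'DHHC-1:02:YmI4YzNi...' --dhchap-ctrl-secret 'DHHC-1:01:MjE1MGM0...'
  nvme disconnect -n "$subnqn"                               # expect: "disconnected 1 controller(s)"
  $tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # target forgets the host before the next round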
00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.993 22:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.299 00:21:39.299 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.299 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.299 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.599 { 00:21:39.599 "cntlid": 71, 00:21:39.599 "qid": 0, 00:21:39.599 "state": "enabled", 00:21:39.599 "thread": "nvmf_tgt_poll_group_000", 00:21:39.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:39.599 "listen_address": { 00:21:39.599 "trtype": "TCP", 00:21:39.599 "adrfam": "IPv4", 00:21:39.599 "traddr": "10.0.0.2", 00:21:39.599 "trsvcid": "4420" 00:21:39.599 }, 00:21:39.599 "peer_address": { 00:21:39.599 "trtype": "TCP", 00:21:39.599 "adrfam": "IPv4", 00:21:39.599 "traddr": "10.0.0.1", 00:21:39.599 "trsvcid": "42630" 00:21:39.599 }, 00:21:39.599 "auth": { 00:21:39.599 "state": "completed", 00:21:39.599 "digest": "sha384", 00:21:39.599 "dhgroup": "ffdhe3072" 00:21:39.599 } 00:21:39.599 } 00:21:39.599 ]' 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.599 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.872 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:39.872 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:40.444 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
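The `[[ sha384 == \s\h\a\3\8\4 ]]`-style lines that follow each JSON dump are bash comparisons of three jq probes into the target's qpair report. Restated compactly with the same filters the trace uses (the herestring plumbing is an illustration, not the script's literal code):

  qpairs=$($tgt_rpc nvmf_subsystem_get_qpairs "$subnqn")     # the JSON arrays dumped throughout this trace
  jq -r '.[0].auth.state'   <<< "$qpairs"                    # must print "completed"
  jq -r '.[0].auth.digest'  <<< "$qpairs"                    # must equal the round's digest (sha384 here)
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"                    # must equal the round's dhgroup (ffdhe4096 at this point)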
00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.705 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.705 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.967 { 00:21:40.967 "cntlid": 73, 00:21:40.967 "qid": 0, 00:21:40.967 "state": "enabled", 00:21:40.967 "thread": "nvmf_tgt_poll_group_000", 00:21:40.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:40.967 "listen_address": { 00:21:40.967 "trtype": "TCP", 00:21:40.967 "adrfam": "IPv4", 00:21:40.967 "traddr": "10.0.0.2", 00:21:40.967 "trsvcid": "4420" 00:21:40.967 }, 00:21:40.967 "peer_address": { 00:21:40.967 "trtype": "TCP", 00:21:40.967 "adrfam": "IPv4", 00:21:40.967 "traddr": "10.0.0.1", 00:21:40.967 "trsvcid": "42652" 00:21:40.967 }, 00:21:40.967 "auth": { 00:21:40.967 "state": "completed", 00:21:40.967 "digest": "sha384", 00:21:40.967 "dhgroup": "ffdhe4096" 00:21:40.967 } 00:21:40.967 } 00:21:40.967 ]' 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.967 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.227 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.227 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.227 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.227 
22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.227 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.487 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:41.487 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:21:42.060 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.060 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:42.060 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.060 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.060 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.060 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.060 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:42.060 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.321 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.582 00:21:42.582 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.583 { 00:21:42.583 "cntlid": 75, 00:21:42.583 "qid": 0, 00:21:42.583 "state": "enabled", 00:21:42.583 "thread": "nvmf_tgt_poll_group_000", 00:21:42.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:42.583 "listen_address": { 00:21:42.583 "trtype": "TCP", 00:21:42.583 "adrfam": "IPv4", 00:21:42.583 "traddr": "10.0.0.2", 00:21:42.583 "trsvcid": "4420" 00:21:42.583 }, 00:21:42.583 "peer_address": { 00:21:42.583 "trtype": "TCP", 00:21:42.583 "adrfam": "IPv4", 00:21:42.583 "traddr": "10.0.0.1", 00:21:42.583 "trsvcid": "41564" 00:21:42.583 }, 00:21:42.583 "auth": { 00:21:42.583 "state": "completed", 00:21:42.583 "digest": "sha384", 00:21:42.583 "dhgroup": "ffdhe4096" 00:21:42.583 } 00:21:42.583 } 00:21:42.583 ]' 00:21:42.583 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.842 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.842 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.842 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:42.842 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.842 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.842 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.842 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.103 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:43.103 22:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:21:43.673 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.673 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.673 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.673 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.673 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.673 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.673 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:43.673 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.933 22:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.193 00:21:44.193 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.193 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.193 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.453 { 00:21:44.453 "cntlid": 77, 00:21:44.453 "qid": 0, 00:21:44.453 "state": "enabled", 00:21:44.453 "thread": "nvmf_tgt_poll_group_000", 00:21:44.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:44.453 "listen_address": { 00:21:44.453 "trtype": "TCP", 00:21:44.453 "adrfam": "IPv4", 00:21:44.453 "traddr": "10.0.0.2", 00:21:44.453 "trsvcid": "4420" 00:21:44.453 }, 00:21:44.453 "peer_address": { 00:21:44.453 "trtype": "TCP", 00:21:44.453 "adrfam": "IPv4", 00:21:44.453 "traddr": "10.0.0.1", 00:21:44.453 "trsvcid": "41584" 00:21:44.453 }, 00:21:44.453 "auth": { 00:21:44.453 "state": "completed", 00:21:44.453 "digest": "sha384", 00:21:44.453 "dhgroup": "ffdhe4096" 00:21:44.453 } 00:21:44.453 } 00:21:44.453 ]' 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.453 22:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.453 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.714 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:44.714 22:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:21:45.284 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.285 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:45.285 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.285 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.285 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.285 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.285 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:45.285 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.545 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.806 00:21:45.806 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.806 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.806 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.066 { 00:21:46.066 "cntlid": 79, 00:21:46.066 "qid": 0, 00:21:46.066 "state": "enabled", 00:21:46.066 "thread": "nvmf_tgt_poll_group_000", 00:21:46.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:46.066 "listen_address": { 00:21:46.066 "trtype": "TCP", 00:21:46.066 "adrfam": "IPv4", 00:21:46.066 "traddr": "10.0.0.2", 00:21:46.066 "trsvcid": "4420" 00:21:46.066 }, 00:21:46.066 "peer_address": { 00:21:46.066 "trtype": "TCP", 00:21:46.066 "adrfam": "IPv4", 00:21:46.066 "traddr": "10.0.0.1", 00:21:46.066 "trsvcid": "41608" 00:21:46.066 }, 00:21:46.066 "auth": { 00:21:46.066 "state": "completed", 00:21:46.066 "digest": "sha384", 00:21:46.066 "dhgroup": "ffdhe4096" 00:21:46.066 } 00:21:46.066 } 00:21:46.066 ]' 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.066 22:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:46.066 22:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.066 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.066 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.066 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.326 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:46.326 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:47.155 22:50:14 
00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:46.895 22:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:47.155 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:47.415
00:21:47.415 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:47.415 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:47.415 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:47.675 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:47.675 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:47.676 {
00:21:47.676 "cntlid": 81,
00:21:47.676 "qid": 0,
00:21:47.676 "state": "enabled",
00:21:47.676 "thread": "nvmf_tgt_poll_group_000",
00:21:47.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:47.676 "listen_address": {
00:21:47.676 "trtype": "TCP",
00:21:47.676 "adrfam": "IPv4",
00:21:47.676 "traddr": "10.0.0.2",
00:21:47.676 "trsvcid": "4420"
00:21:47.676 },
00:21:47.676 "peer_address": {
00:21:47.676 "trtype": "TCP",
00:21:47.676 "adrfam": "IPv4",
00:21:47.676 "traddr": "10.0.0.1",
00:21:47.676 "trsvcid": "41626"
00:21:47.676 },
00:21:47.676 "auth": {
00:21:47.676 "state": "completed",
00:21:47.676 "digest": "sha384",
00:21:47.676 "dhgroup": "ffdhe6144"
00:21:47.676 }
00:21:47.676 }
00:21:47.676 ]'
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:47.676 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:47.936 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:47.936 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:47.936 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:47.936 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=:
00:21:47.936 22:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=:
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:48.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:48.876 22:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:49.136
00:21:49.136 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:49.136 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:49.136 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:49.396 {
00:21:49.396 "cntlid": 83,
00:21:49.396 "qid": 0,
00:21:49.396 "state": "enabled",
00:21:49.396 "thread": "nvmf_tgt_poll_group_000",
00:21:49.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:49.396 "listen_address": {
00:21:49.396 "trtype": "TCP",
00:21:49.396 "adrfam": "IPv4",
00:21:49.396 "traddr": "10.0.0.2",
00:21:49.396 "trsvcid": "4420"
00:21:49.396 },
00:21:49.396 "peer_address": {
00:21:49.396 "trtype": "TCP",
00:21:49.396 "adrfam": "IPv4",
00:21:49.396 "traddr": "10.0.0.1",
00:21:49.396 "trsvcid": "41654"
00:21:49.396 },
00:21:49.396 "auth": {
00:21:49.396 "state": "completed",
00:21:49.396 "digest": "sha384",
00:21:49.396 "dhgroup": "ffdhe6144"
00:21:49.396 }
00:21:49.396 }
00:21:49.396 ]'
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:49.396 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:49.657 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:49.657 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:49.657 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:49.657 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==:
00:21:49.657 22:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==:
00:21:50.226 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:50.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:50.486 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:51.057
00:21:51.057 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:51.057 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:51.057 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:51.057 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:51.057 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:51.057 22:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:51.057 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:51.057 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:51.057 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:51.057 {
00:21:51.057 "cntlid": 85,
00:21:51.057 "qid": 0,
00:21:51.057 "state": "enabled",
00:21:51.057 "thread": "nvmf_tgt_poll_group_000",
00:21:51.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:51.057 "listen_address": {
00:21:51.057 "trtype": "TCP",
00:21:51.057 "adrfam": "IPv4",
00:21:51.057 "traddr": "10.0.0.2",
00:21:51.057 "trsvcid": "4420"
00:21:51.057 },
00:21:51.057 "peer_address": {
00:21:51.057 "trtype": "TCP",
00:21:51.057 "adrfam": "IPv4",
00:21:51.057 "traddr": "10.0.0.1",
00:21:51.057 "trsvcid": "41684"
00:21:51.057 },
00:21:51.057 "auth": {
00:21:51.057 "state": "completed",
00:21:51.057 "digest": "sha384",
00:21:51.057 "dhgroup": "ffdhe6144"
00:21:51.057 }
00:21:51.057 }
00:21:51.057 ]'
00:21:51.057 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:51.057 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:51.057 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:51.317 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:51.317 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:51.317 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:51.317 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:51.317 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:51.317 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I:
00:21:51.317 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I:
00:21:52.257 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:52.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:52.257 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:52.257 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:52.257 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.257 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:52.257 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:52.257 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:52.257 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:52.257 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:52.516
00:21:52.516 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:52.516 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:52.516 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:52.777 {
00:21:52.777 "cntlid": 87,
00:21:52.777 "qid": 0,
00:21:52.777 "state": "enabled",
00:21:52.777 "thread": "nvmf_tgt_poll_group_000",
00:21:52.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:52.777 "listen_address": {
00:21:52.777 "trtype": "TCP",
00:21:52.777 "adrfam": "IPv4",
00:21:52.777 "traddr": "10.0.0.2",
00:21:52.777 "trsvcid": "4420"
00:21:52.777 },
00:21:52.777 "peer_address": {
00:21:52.777 "trtype": "TCP",
00:21:52.777 "adrfam": "IPv4",
00:21:52.777 "traddr": "10.0.0.1",
00:21:52.777 "trsvcid": "42920"
00:21:52.777 },
00:21:52.777 "auth": {
00:21:52.777 "state": "completed",
00:21:52.777 "digest": "sha384",
00:21:52.777 "dhgroup": "ffdhe6144"
00:21:52.777 }
00:21:52.777 }
00:21:52.777 ]'
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:52.777 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:53.038 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:53.038 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:53.038 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:53.038 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:53.038 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=:
00:21:53.038 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=:
00:21:53.609 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:53.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
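That closes out ffdhe6144; the `for dhgroup` marker that follows advances the matrix to ffdhe8192. The loop markers (@118-@121) show how the matrix is driven: before each combination the host-side driver is pinned to exactly one digest and one DH group, so a successful attach can only mean that this specific pairing negotiated. The outer structure, reduced to a skeleton (array contents are illustrative; sha384/sha512 and null/ffdhe4096/ffdhe6144/ffdhe8192 are the values visible in this excerpt):

    # Drive each digest/dhgroup/key combination through one auth cycle.
    digests=(sha384 sha512)
    dhgroups=(null ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Pin the host to a single candidate combination...
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # ...then run the attach/verify/teardown cycle with key $keyid
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done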
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:53.870 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:54.442
00:21:54.442 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:54.442 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:54.442 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:54.702 {
00:21:54.702 "cntlid": 89,
00:21:54.702 "qid": 0,
00:21:54.702 "state": "enabled",
00:21:54.702 "thread": "nvmf_tgt_poll_group_000",
00:21:54.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:54.702 "listen_address": {
00:21:54.702 "trtype": "TCP",
00:21:54.702 "adrfam": "IPv4",
00:21:54.702 "traddr": "10.0.0.2",
00:21:54.702 "trsvcid": "4420"
00:21:54.702 },
00:21:54.702 "peer_address": {
00:21:54.702 "trtype": "TCP",
00:21:54.702 "adrfam": "IPv4",
00:21:54.702 "traddr": "10.0.0.1",
00:21:54.702 "trsvcid": "42936"
00:21:54.702 },
00:21:54.702 "auth": {
00:21:54.702 "state": "completed",
00:21:54.702 "digest": "sha384",
00:21:54.702 "dhgroup": "ffdhe8192"
00:21:54.702 }
00:21:54.702 }
00:21:54.702 ]'
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:54.702 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:54.963 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=:
00:21:54.963 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=:
00:21:55.533 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:55.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:55.533 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:55.533 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.533 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:55.533 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
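On the kernel-initiator side the same credentials go through nvme-cli, as the @36 lines show: --dhchap-secret authenticates the host to the controller, and --dhchap-ctrl-secret, when present, demands that the controller authenticate back. Stripped of the long secrets, the connect call used throughout this trace reduces to the following sketch (the secret values are placeholders):

    # Bidirectional DH-HMAC-CHAP connect from the Linux host.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:00:<host key>:" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller key>:"
    # Omit --dhchap-ctrl-secret for unidirectional auth (the key3 passes above).
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0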
00:21:55.533 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:55.533 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:55.533 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:55.793 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:21:55.793 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:55.793 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:55.793 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:55.794 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:56.371
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:56.371 {
00:21:56.371 "cntlid": 91,
00:21:56.371 "qid": 0,
00:21:56.371 "state": "enabled",
00:21:56.371 "thread": "nvmf_tgt_poll_group_000",
00:21:56.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:56.371 "listen_address": {
00:21:56.371 "trtype": "TCP",
00:21:56.371 "adrfam": "IPv4",
00:21:56.371 "traddr": "10.0.0.2",
00:21:56.371 "trsvcid": "4420"
00:21:56.371 },
00:21:56.371 "peer_address": {
00:21:56.371 "trtype": "TCP",
00:21:56.371 "adrfam": "IPv4",
00:21:56.371 "traddr": "10.0.0.1",
00:21:56.371 "trsvcid": "42978"
00:21:56.371 },
00:21:56.371 "auth": {
00:21:56.371 "state": "completed",
00:21:56.371 "digest": "sha384",
00:21:56.371 "dhgroup": "ffdhe8192"
00:21:56.371 }
00:21:56.371 }
00:21:56.371 ]'
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:56.371 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:56.631 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:56.631 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:56.631 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:56.631 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==:
00:21:56.631 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==:
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:57.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
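Every host-side step above funnels through the script's hostrpc wrapper (target/auth.sh@31): the rpc_cmd calls drive the nvmf target over its default RPC socket, while -s /var/tmp/host.sock reaches the second SPDK application acting as the NVMe-oF host. The wrapper amounts to a one-liner; defining it like this keeps the two control planes from being mixed up:

    # rpc_cmd drives the target app; hostrpc drives the host app
    # listening on its own RPC socket.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

    hostrpc bdev_nvme_get_controllers   # queries the host app, not the target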
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:57.571 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:58.141
00:21:58.141 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:58.141 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:58.141 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:58.141 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:58.141 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:58.141 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.141 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.141 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.141 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:58.141 {
00:21:58.141 "cntlid": 93,
00:21:58.141 "qid": 0,
00:21:58.141 "state": "enabled",
00:21:58.141 "thread": "nvmf_tgt_poll_group_000",
00:21:58.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:58.141 "listen_address": {
00:21:58.141 "trtype": "TCP",
00:21:58.141 "adrfam": "IPv4",
00:21:58.141 "traddr": "10.0.0.2",
00:21:58.141 "trsvcid": "4420"
00:21:58.141 },
00:21:58.141 "peer_address": {
00:21:58.141 "trtype": "TCP",
00:21:58.141 "adrfam": "IPv4",
00:21:58.141 "traddr": "10.0.0.1",
00:21:58.141 "trsvcid": "43002"
00:21:58.141 },
00:21:58.141 "auth": {
00:21:58.141 "state": "completed",
00:21:58.141 "digest": "sha384",
00:21:58.141 "dhgroup": "ffdhe8192"
00:21:58.141 }
00:21:58.141 }
00:21:58.141 ]'
00:21:58.400 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:58.400 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:58.400 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:58.400 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:58.400 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:58.400 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:58.400 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:58.400 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:58.661 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I:
00:21:58.661 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I:
00:21:59.231 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:59.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:59.231 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:59.231 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:59.231 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.231 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
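The recurring `[[ nvme0 == \n\v\m\e\0 ]]` checks look odd but are just xtrace artifacts of a literal comparison: the right-hand side of == inside [[ ]] is a glob pattern, so bash's trace re-quotes the quoted literal by escaping every character, forcing an exact string match against the controller name reported by bdev_nvme_get_controllers. The unescaped form of the same check:

    # Confirm the attach actually produced the expected controller entry.
    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # quoting the RHS disables glob matching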
00:21:59.231 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:59.231 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:59.231 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:59.491 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:00.065
00:22:00.065 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:00.065 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:00.065 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:00.065 22:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:00.065 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:00.065 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:00.065 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.065 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:00.065 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:00.065 {
00:22:00.065 "cntlid": 95,
00:22:00.065 "qid": 0,
00:22:00.065 "state": "enabled",
00:22:00.065 "thread": "nvmf_tgt_poll_group_000",
00:22:00.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:00.065 "listen_address": {
00:22:00.065 "trtype": "TCP",
00:22:00.065 "adrfam": "IPv4",
00:22:00.065 "traddr": "10.0.0.2",
00:22:00.065 "trsvcid": "4420"
00:22:00.065 },
00:22:00.065 "peer_address": {
00:22:00.065 "trtype": "TCP",
00:22:00.065 "adrfam": "IPv4",
00:22:00.065 "traddr": "10.0.0.1",
00:22:00.065 "trsvcid": "43026"
00:22:00.065 },
00:22:00.065 "auth": {
00:22:00.065 "state": "completed",
00:22:00.065 "digest": "sha384",
00:22:00.065 "dhgroup": "ffdhe8192"
00:22:00.065 }
00:22:00.065 }
00:22:00.065 ]'
00:22:00.065 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:00.065 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:00.325 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:00.325 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:00.325 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:00.325 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:00.325 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:00.325 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:00.325 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=:
00:22:00.325 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=:
00:22:01.269 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:01.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:01.269 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:01.269 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:01.269 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:01.269 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
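With the sha384 matrix exhausted, the `for digest` marker below advances to sha512, starting with DH group "null": in DH-HMAC-CHAP this means no Diffie-Hellman exchange at all, a plain challenge-response against the shared key, so the session is authenticated but gains no forward secrecy from an ephemeral exchange. The qpairs output accordingly reports "dhgroup": "null". Host-side, it is the same single-candidate pinning as before:

    # Restrict the host to SHA-512 digests and no DH exchange; a successful
    # attach can then only mean plain challenge-response DH-HMAC-CHAP.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null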
22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.530 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.530 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.530 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.530 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.530 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.530 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.792 { 00:22:01.792 "cntlid": 97, 00:22:01.792 "qid": 0, 00:22:01.792 "state": "enabled", 00:22:01.792 "thread": "nvmf_tgt_poll_group_000", 00:22:01.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:01.792 "listen_address": { 00:22:01.792 "trtype": "TCP", 00:22:01.792 "adrfam": "IPv4", 00:22:01.792 "traddr": "10.0.0.2", 00:22:01.792 "trsvcid": "4420" 00:22:01.792 }, 00:22:01.792 "peer_address": { 00:22:01.792 "trtype": "TCP", 00:22:01.792 "adrfam": "IPv4", 00:22:01.792 "traddr": "10.0.0.1", 00:22:01.792 "trsvcid": "43034" 00:22:01.792 }, 00:22:01.792 "auth": { 00:22:01.792 "state": "completed", 00:22:01.792 "digest": "sha512", 00:22:01.792 "dhgroup": "null" 00:22:01.792 } 00:22:01.792 } 00:22:01.792 ]' 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.792 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.053 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:02.053 22:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:02.625 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.625 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:02.625 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.625 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.625 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.625 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.625 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:02.625 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.886 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.147 00:22:03.147 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.147 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.147 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.147 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.147 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.147 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.147 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.147 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.147 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.147 { 00:22:03.147 "cntlid": 99, 00:22:03.147 "qid": 0, 00:22:03.147 "state": "enabled", 00:22:03.147 "thread": "nvmf_tgt_poll_group_000", 00:22:03.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:03.147 "listen_address": { 00:22:03.147 "trtype": "TCP", 00:22:03.147 "adrfam": "IPv4", 00:22:03.147 "traddr": "10.0.0.2", 00:22:03.147 "trsvcid": "4420" 00:22:03.147 }, 00:22:03.147 "peer_address": { 00:22:03.147 "trtype": "TCP", 00:22:03.147 "adrfam": "IPv4", 00:22:03.147 "traddr": "10.0.0.1", 00:22:03.147 "trsvcid": "54940" 00:22:03.147 }, 00:22:03.147 "auth": { 00:22:03.147 "state": "completed", 00:22:03.147 "digest": "sha512", 00:22:03.147 "dhgroup": "null" 00:22:03.147 } 00:22:03.147 } 00:22:03.147 ]' 00:22:03.147 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.408 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.408 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.408 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:03.408 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.408 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.408 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.408 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.670 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:03.670 22:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:04.241 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.241 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.241 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.241 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.241 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.241 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:04.241 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:04.502 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:04.502 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.502 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.502 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:04.502 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.502 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.502 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.503 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.503 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.503 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.503 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.503 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
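[Editor's note] The trace above repeats one pattern per digest/dhgroup/key index: reconfigure the host's DH-HMAC-CHAP options, register the host NQN on the subsystem with a key pair, and attach a controller that must authenticate. Below is a minimal sketch of that sequence using only the RPCs and flags visible in this log; the socket path, addresses, NQNs, and key names (key2/ckey2) mirror the run above, and the keys are assumed to be already registered on both sides earlier in the test.

#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP iteration, as exercised by target/auth.sh above.
# Assumptions: SPDK checkout at $SPDK, target listening on 10.0.0.2:4420 with
# subsystem nqn.2024-03.io.spdk:cnode0, host bdev RPC server on /var/tmp/host.sock,
# and DH-CHAP keys "key2"/"ckey2" already loaded by an earlier step of the test.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Restrict the host to a single digest and DH group for this pass.
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null

# Target side (its default RPC socket): allow the host NQN with a key pair.
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller; this only succeeds if authentication passes.
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

Passing --dhchap-ctrlr-key in addition to --dhchap-key makes the authentication bidirectional: the host also verifies the controller, which is why the log supplies both a key and a ckey in most iterations.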
00:22:04.503 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.503 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.764 { 00:22:04.764 "cntlid": 101, 00:22:04.764 "qid": 0, 00:22:04.764 "state": "enabled", 00:22:04.764 "thread": "nvmf_tgt_poll_group_000", 00:22:04.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:04.764 "listen_address": { 00:22:04.764 "trtype": "TCP", 00:22:04.764 "adrfam": "IPv4", 00:22:04.764 "traddr": "10.0.0.2", 00:22:04.764 "trsvcid": "4420" 00:22:04.764 }, 00:22:04.764 "peer_address": { 00:22:04.764 "trtype": "TCP", 00:22:04.764 "adrfam": "IPv4", 00:22:04.764 "traddr": "10.0.0.1", 00:22:04.764 "trsvcid": "54982" 00:22:04.764 }, 00:22:04.764 "auth": { 00:22:04.764 "state": "completed", 00:22:04.764 "digest": "sha512", 00:22:04.764 "dhgroup": "null" 00:22:04.764 } 00:22:04.764 } 00:22:04.764 ]' 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.764 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.025 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:05.025 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.025 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.025 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.025 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.025 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:05.025 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.968 22:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.229 00:22:06.229 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.229 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.229 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.490 { 00:22:06.490 "cntlid": 103, 00:22:06.490 "qid": 0, 00:22:06.490 "state": "enabled", 00:22:06.490 "thread": "nvmf_tgt_poll_group_000", 00:22:06.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:06.490 "listen_address": { 00:22:06.490 "trtype": "TCP", 00:22:06.490 "adrfam": "IPv4", 00:22:06.490 "traddr": "10.0.0.2", 00:22:06.490 "trsvcid": "4420" 00:22:06.490 }, 00:22:06.490 "peer_address": { 00:22:06.490 "trtype": "TCP", 00:22:06.490 "adrfam": "IPv4", 00:22:06.490 "traddr": "10.0.0.1", 00:22:06.490 "trsvcid": "55006" 00:22:06.490 }, 00:22:06.490 "auth": { 00:22:06.490 "state": "completed", 00:22:06.490 "digest": "sha512", 00:22:06.490 "dhgroup": "null" 00:22:06.490 } 00:22:06.490 } 00:22:06.490 ]' 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.490 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.751 22:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:06.751 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.322 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
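[Editor's note] After each attach, the script verifies the result rather than trusting the RPC's exit status: the controller must exist on the host under the expected name, and the target's qpair listing must report the negotiated digest, DH group, and a "completed" auth state. A sketch of those checks, using the same jq filters seen in the log, under the same assumptions as the previous sketch (the expected values match this iteration: sha512/ffdhe2048):

# Verify an authenticated connection, mirroring target/auth.sh's checks.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0

# The attached controller must show up on the host under the name we chose.
name=$("$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]

# The target's qpair listing records what was actually negotiated.
qpairs=$("$SPDK/scripts/rpc.py" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "sha512" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "ffdhe2048" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == "completed" ]]

# Tear down before the next digest/dhgroup/key combination.
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0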
00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.583 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.844 00:22:07.844 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.844 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.844 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.106 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.106 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.106 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.106 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.106 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.106 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.106 { 00:22:08.106 "cntlid": 105, 00:22:08.106 "qid": 0, 00:22:08.106 "state": "enabled", 00:22:08.106 "thread": "nvmf_tgt_poll_group_000", 00:22:08.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:08.106 "listen_address": { 00:22:08.106 "trtype": "TCP", 00:22:08.106 "adrfam": "IPv4", 00:22:08.106 "traddr": "10.0.0.2", 00:22:08.106 "trsvcid": "4420" 00:22:08.106 }, 00:22:08.106 "peer_address": { 00:22:08.106 "trtype": "TCP", 00:22:08.106 "adrfam": "IPv4", 00:22:08.106 "traddr": "10.0.0.1", 00:22:08.106 "trsvcid": "55036" 00:22:08.106 }, 00:22:08.106 "auth": { 00:22:08.107 "state": "completed", 00:22:08.107 "digest": "sha512", 00:22:08.107 "dhgroup": "ffdhe2048" 00:22:08.107 } 00:22:08.107 } 00:22:08.107 ]' 00:22:08.107 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.107 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.107 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.107 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:08.107 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.107 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.107 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.107 22:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.368 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:08.368 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:08.939 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.939 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:08.939 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.939 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.939 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.939 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.939 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.939 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.462 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.462 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.462 { 00:22:09.462 "cntlid": 107, 00:22:09.462 "qid": 0, 00:22:09.462 "state": "enabled", 00:22:09.462 "thread": "nvmf_tgt_poll_group_000", 00:22:09.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:09.462 "listen_address": { 00:22:09.462 "trtype": "TCP", 00:22:09.462 "adrfam": "IPv4", 00:22:09.462 "traddr": "10.0.0.2", 00:22:09.462 "trsvcid": "4420" 00:22:09.462 }, 00:22:09.463 "peer_address": { 00:22:09.463 "trtype": "TCP", 00:22:09.463 "adrfam": "IPv4", 00:22:09.463 "traddr": "10.0.0.1", 00:22:09.463 "trsvcid": "55072" 00:22:09.463 }, 00:22:09.463 "auth": { 00:22:09.463 "state": "completed", 00:22:09.463 "digest": "sha512", 00:22:09.463 "dhgroup": "ffdhe2048" 00:22:09.463 } 00:22:09.463 } 00:22:09.463 ]' 00:22:09.463 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.724 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.724 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.724 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:09.724 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:09.724 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.724 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.724 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.985 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:09.985 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:10.557 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.557 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.557 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.557 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.557 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.557 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.557 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:10.557 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
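[Editor's note] The second half of each iteration exercises the kernel initiator instead of the SPDK host: nvme-cli connects with the secrets passed on the command line, disconnects, and the host entry is removed from the subsystem so the next combination starts clean. A sketch of that leg follows; the DHHC-1 secrets are placeholders, not real values — the actual run uses keys generated earlier in the log.

# Kernel-initiator leg of one iteration, as driven by nvme_connect()/nvme disconnect above.
# Assumptions: nvme-cli with TCP and DH-CHAP support; placeholder secrets below.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
KEY='DHHC-1:01:<host-secret>'          # placeholder, not a real key
CKEY='DHHC-1:02:<controller-secret>'   # placeholder, not a real key

nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

nvme disconnect -n "$SUBNQN"

# Drop the host entry so the next digest/dhgroup/key pass re-adds it fresh.
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"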
00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.818 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.079 00:22:11.079 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.079 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.079 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.342 { 00:22:11.342 "cntlid": 109, 00:22:11.342 "qid": 0, 00:22:11.342 "state": "enabled", 00:22:11.342 "thread": "nvmf_tgt_poll_group_000", 00:22:11.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:11.342 "listen_address": { 00:22:11.342 "trtype": "TCP", 00:22:11.342 "adrfam": "IPv4", 00:22:11.342 "traddr": "10.0.0.2", 00:22:11.342 "trsvcid": "4420" 00:22:11.342 }, 00:22:11.342 "peer_address": { 00:22:11.342 "trtype": "TCP", 00:22:11.342 "adrfam": "IPv4", 00:22:11.342 "traddr": "10.0.0.1", 00:22:11.342 "trsvcid": "55096" 00:22:11.342 }, 00:22:11.342 "auth": { 00:22:11.342 "state": "completed", 00:22:11.342 "digest": "sha512", 00:22:11.342 "dhgroup": "ffdhe2048" 00:22:11.342 } 00:22:11.342 } 00:22:11.342 ]' 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.342 22:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.342 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.603 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:11.603 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:12.177 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.177 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:12.177 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.177 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.177 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.177 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.177 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:12.177 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.439 22:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.439 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.701 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.701 { 00:22:12.701 "cntlid": 111, 00:22:12.701 "qid": 0, 00:22:12.701 "state": "enabled", 00:22:12.701 "thread": "nvmf_tgt_poll_group_000", 00:22:12.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:12.701 "listen_address": { 00:22:12.701 "trtype": "TCP", 00:22:12.701 "adrfam": "IPv4", 00:22:12.701 "traddr": "10.0.0.2", 00:22:12.701 "trsvcid": "4420" 00:22:12.701 }, 00:22:12.701 "peer_address": { 00:22:12.701 "trtype": "TCP", 00:22:12.701 "adrfam": "IPv4", 00:22:12.701 "traddr": "10.0.0.1", 00:22:12.701 "trsvcid": "38776" 00:22:12.701 }, 00:22:12.701 "auth": { 00:22:12.701 "state": "completed", 00:22:12.701 "digest": "sha512", 00:22:12.701 "dhgroup": "ffdhe2048" 00:22:12.701 } 00:22:12.701 } 00:22:12.701 ]' 00:22:12.701 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.961 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.961 
22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.961 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:12.961 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.961 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.961 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.961 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.222 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:13.222 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.793 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.053 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.313 00:22:14.313 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.313 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.313 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.313 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.313 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.313 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.313 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.574 { 00:22:14.574 "cntlid": 113, 00:22:14.574 "qid": 0, 00:22:14.574 "state": "enabled", 00:22:14.574 "thread": "nvmf_tgt_poll_group_000", 00:22:14.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:14.574 "listen_address": { 00:22:14.574 "trtype": "TCP", 00:22:14.574 "adrfam": "IPv4", 00:22:14.574 "traddr": "10.0.0.2", 00:22:14.574 "trsvcid": "4420" 00:22:14.574 }, 00:22:14.574 "peer_address": { 00:22:14.574 "trtype": "TCP", 00:22:14.574 "adrfam": "IPv4", 00:22:14.574 "traddr": "10.0.0.1", 00:22:14.574 "trsvcid": "38804" 00:22:14.574 }, 00:22:14.574 "auth": { 00:22:14.574 "state": "completed", 00:22:14.574 "digest": "sha512", 00:22:14.574 "dhgroup": "ffdhe3072" 00:22:14.574 } 00:22:14.574 } 00:22:14.574 ]' 00:22:14.574 22:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.574 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.834 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:14.834 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:15.406 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.406 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:15.406 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.406 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.406 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.406 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.406 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:15.406 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:15.665 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.666 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.926 00:22:15.926 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.926 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.926 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.186 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.186 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.186 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.186 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.186 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.186 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.186 { 00:22:16.186 "cntlid": 115, 00:22:16.186 "qid": 0, 00:22:16.186 "state": "enabled", 00:22:16.186 "thread": "nvmf_tgt_poll_group_000", 00:22:16.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:16.186 "listen_address": { 00:22:16.186 "trtype": "TCP", 00:22:16.186 "adrfam": "IPv4", 00:22:16.186 "traddr": "10.0.0.2", 00:22:16.186 "trsvcid": "4420" 00:22:16.186 }, 00:22:16.186 "peer_address": { 00:22:16.186 "trtype": "TCP", 00:22:16.186 "adrfam": "IPv4", 
00:22:16.186 "traddr": "10.0.0.1", 00:22:16.186 "trsvcid": "38840" 00:22:16.186 }, 00:22:16.186 "auth": { 00:22:16.186 "state": "completed", 00:22:16.186 "digest": "sha512", 00:22:16.186 "dhgroup": "ffdhe3072" 00:22:16.186 } 00:22:16.186 } 00:22:16.186 ]' 00:22:16.186 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.186 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.186 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.186 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:16.186 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.186 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.186 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.186 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.446 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:16.446 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:17.058 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.058 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:17.058 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.058 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.058 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.058 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.058 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:17.058 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.403 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.663 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.663 { 00:22:17.663 "cntlid": 117, 00:22:17.663 "qid": 0, 00:22:17.663 "state": "enabled", 00:22:17.663 "thread": "nvmf_tgt_poll_group_000", 00:22:17.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:17.663 "listen_address": { 00:22:17.663 "trtype": "TCP", 
00:22:17.663 "adrfam": "IPv4", 00:22:17.663 "traddr": "10.0.0.2", 00:22:17.663 "trsvcid": "4420" 00:22:17.663 }, 00:22:17.663 "peer_address": { 00:22:17.663 "trtype": "TCP", 00:22:17.663 "adrfam": "IPv4", 00:22:17.663 "traddr": "10.0.0.1", 00:22:17.663 "trsvcid": "38872" 00:22:17.663 }, 00:22:17.663 "auth": { 00:22:17.663 "state": "completed", 00:22:17.663 "digest": "sha512", 00:22:17.663 "dhgroup": "ffdhe3072" 00:22:17.663 } 00:22:17.663 } 00:22:17.663 ]' 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.663 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.924 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:17.924 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.924 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.924 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.924 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.184 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:18.184 22:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:18.753 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.754 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.754 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.754 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.754 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.754 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.754 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:18.754 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.015 22:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.015 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.276 { 00:22:19.276 "cntlid": 119, 00:22:19.276 "qid": 0, 00:22:19.276 "state": "enabled", 00:22:19.276 "thread": "nvmf_tgt_poll_group_000", 00:22:19.276 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:19.276 "listen_address": { 00:22:19.276 "trtype": "TCP", 00:22:19.276 "adrfam": "IPv4", 00:22:19.276 "traddr": "10.0.0.2", 00:22:19.276 "trsvcid": "4420" 00:22:19.276 }, 00:22:19.276 "peer_address": { 00:22:19.276 "trtype": "TCP", 00:22:19.276 "adrfam": "IPv4", 00:22:19.276 "traddr": "10.0.0.1", 00:22:19.276 "trsvcid": "38902" 00:22:19.276 }, 00:22:19.276 "auth": { 00:22:19.276 "state": "completed", 00:22:19.276 "digest": "sha512", 00:22:19.276 "dhgroup": "ffdhe3072" 00:22:19.276 } 00:22:19.276 } 00:22:19.276 ]' 00:22:19.276 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.536 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.536 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.536 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:19.536 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.536 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.536 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.536 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.796 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:19.797 22:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:20.367 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.367 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:20.367 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.367 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.367 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.367 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.367 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.367 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:20.367 22:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.627 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.887 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.887 22:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.887 { 00:22:20.887 "cntlid": 121, 00:22:20.887 "qid": 0, 00:22:20.887 "state": "enabled", 00:22:20.887 "thread": "nvmf_tgt_poll_group_000", 00:22:20.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:20.887 "listen_address": { 00:22:20.887 "trtype": "TCP", 00:22:20.887 "adrfam": "IPv4", 00:22:20.887 "traddr": "10.0.0.2", 00:22:20.887 "trsvcid": "4420" 00:22:20.887 }, 00:22:20.887 "peer_address": { 00:22:20.887 "trtype": "TCP", 00:22:20.887 "adrfam": "IPv4", 00:22:20.887 "traddr": "10.0.0.1", 00:22:20.887 "trsvcid": "38918" 00:22:20.887 }, 00:22:20.887 "auth": { 00:22:20.887 "state": "completed", 00:22:20.887 "digest": "sha512", 00:22:20.887 "dhgroup": "ffdhe4096" 00:22:20.887 } 00:22:20.887 } 00:22:20.887 ]' 00:22:20.887 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.147 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.147 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.147 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:21.147 22:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.147 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.147 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.147 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.406 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:21.406 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:21.977 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.977 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.977 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.977 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.977 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
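After the bdev path is detached, the same keys are exercised through the kernel initiator via nvme-cli (the nvme_connect/nvme disconnect lines above). A minimal sketch of that host-side step, using the address and NQNs from this run; the secret strings follow the spec's DHHC-1 representation, "DHHC-1:<t>:<base64>:", where <t> is 00 for an unhashed secret or 01/02/03 for a SHA-256/384/512-transformed one and the base64 payload carries the secret plus a CRC-32. Placeholder payloads stand in for the live keys here:

    # Connect with a host DH-CHAP secret plus a controller secret for
    # bidirectional authentication, then drop the connection again.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
         --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
         --dhchap-secret "DHHC-1:00:<base64-host-secret>:" \
         --dhchap-ctrl-secret "DHHC-1:03:<base64-ctrl-secret>:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
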
00:22:21.977 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.977 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.977 22:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.237 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.497 00:22:22.497 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.497 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.497 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.497 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.497 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.497 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.497 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.756 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.756 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.757 { 00:22:22.757 "cntlid": 123, 00:22:22.757 "qid": 0, 00:22:22.757 "state": "enabled", 00:22:22.757 "thread": "nvmf_tgt_poll_group_000", 00:22:22.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:22.757 "listen_address": { 00:22:22.757 "trtype": "TCP", 00:22:22.757 "adrfam": "IPv4", 00:22:22.757 "traddr": "10.0.0.2", 00:22:22.757 "trsvcid": "4420" 00:22:22.757 }, 00:22:22.757 "peer_address": { 00:22:22.757 "trtype": "TCP", 00:22:22.757 "adrfam": "IPv4", 00:22:22.757 "traddr": "10.0.0.1", 00:22:22.757 "trsvcid": "40500" 00:22:22.757 }, 00:22:22.757 "auth": { 00:22:22.757 "state": "completed", 00:22:22.757 "digest": "sha512", 00:22:22.757 "dhgroup": "ffdhe4096" 00:22:22.757 } 00:22:22.757 } 00:22:22.757 ]' 00:22:22.757 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.757 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.757 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.757 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:22.757 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.757 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.757 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.757 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.016 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:23.016 22:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:23.587 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.587 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:23.587 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.587 22:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.587 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.587 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.587 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.587 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.847 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.109 00:22:24.109 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.109 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.109 22:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.109 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.109 22:50:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.109 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.109 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.109 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.109 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.109 { 00:22:24.109 "cntlid": 125, 00:22:24.109 "qid": 0, 00:22:24.109 "state": "enabled", 00:22:24.109 "thread": "nvmf_tgt_poll_group_000", 00:22:24.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:24.109 "listen_address": { 00:22:24.109 "trtype": "TCP", 00:22:24.109 "adrfam": "IPv4", 00:22:24.109 "traddr": "10.0.0.2", 00:22:24.109 "trsvcid": "4420" 00:22:24.109 }, 00:22:24.109 "peer_address": { 00:22:24.109 "trtype": "TCP", 00:22:24.109 "adrfam": "IPv4", 00:22:24.109 "traddr": "10.0.0.1", 00:22:24.109 "trsvcid": "40536" 00:22:24.109 }, 00:22:24.109 "auth": { 00:22:24.109 "state": "completed", 00:22:24.109 "digest": "sha512", 00:22:24.109 "dhgroup": "ffdhe4096" 00:22:24.109 } 00:22:24.109 } 00:22:24.109 ]' 00:22:24.109 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.371 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.371 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.371 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:24.371 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.371 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.371 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.371 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.631 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:24.631 22:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:25.204 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.204 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.204 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.204 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.204 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.204 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.204 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:25.204 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:25.464 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.465 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.726 00:22:25.726 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.726 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.726 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.726 22:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.987 { 00:22:25.987 "cntlid": 127, 00:22:25.987 "qid": 0, 00:22:25.987 "state": "enabled", 00:22:25.987 "thread": "nvmf_tgt_poll_group_000", 00:22:25.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:25.987 "listen_address": { 00:22:25.987 "trtype": "TCP", 00:22:25.987 "adrfam": "IPv4", 00:22:25.987 "traddr": "10.0.0.2", 00:22:25.987 "trsvcid": "4420" 00:22:25.987 }, 00:22:25.987 "peer_address": { 00:22:25.987 "trtype": "TCP", 00:22:25.987 "adrfam": "IPv4", 00:22:25.987 "traddr": "10.0.0.1", 00:22:25.987 "trsvcid": "40566" 00:22:25.987 }, 00:22:25.987 "auth": { 00:22:25.987 "state": "completed", 00:22:25.987 "digest": "sha512", 00:22:25.987 "dhgroup": "ffdhe4096" 00:22:25.987 } 00:22:25.987 } 00:22:25.987 ]' 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.987 22:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.248 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:26.248 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.819 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:27.080 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:27.080 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.080 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.080 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:27.080 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:27.080 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.081 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.081 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.081 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.081 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.081 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.081 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.081 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.340 00:22:27.340 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.340 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.340 
22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.601 { 00:22:27.601 "cntlid": 129, 00:22:27.601 "qid": 0, 00:22:27.601 "state": "enabled", 00:22:27.601 "thread": "nvmf_tgt_poll_group_000", 00:22:27.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:27.601 "listen_address": { 00:22:27.601 "trtype": "TCP", 00:22:27.601 "adrfam": "IPv4", 00:22:27.601 "traddr": "10.0.0.2", 00:22:27.601 "trsvcid": "4420" 00:22:27.601 }, 00:22:27.601 "peer_address": { 00:22:27.601 "trtype": "TCP", 00:22:27.601 "adrfam": "IPv4", 00:22:27.601 "traddr": "10.0.0.1", 00:22:27.601 "trsvcid": "40606" 00:22:27.601 }, 00:22:27.601 "auth": { 00:22:27.601 "state": "completed", 00:22:27.601 "digest": "sha512", 00:22:27.601 "dhgroup": "ffdhe6144" 00:22:27.601 } 00:22:27.601 } 00:22:27.601 ]' 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.601 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.863 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:27.863 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret 
DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.805 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.067 00:22:29.067 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.067 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.067 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.328 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.329 { 00:22:29.329 "cntlid": 131, 00:22:29.329 "qid": 0, 00:22:29.329 "state": "enabled", 00:22:29.329 "thread": "nvmf_tgt_poll_group_000", 00:22:29.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:29.329 "listen_address": { 00:22:29.329 "trtype": "TCP", 00:22:29.329 "adrfam": "IPv4", 00:22:29.329 "traddr": "10.0.0.2", 00:22:29.329 "trsvcid": "4420" 00:22:29.329 }, 00:22:29.329 "peer_address": { 00:22:29.329 "trtype": "TCP", 00:22:29.329 "adrfam": "IPv4", 00:22:29.329 "traddr": "10.0.0.1", 00:22:29.329 "trsvcid": "40638" 00:22:29.329 }, 00:22:29.329 "auth": { 00:22:29.329 "state": "completed", 00:22:29.329 "digest": "sha512", 00:22:29.329 "dhgroup": "ffdhe6144" 00:22:29.329 } 00:22:29.329 } 00:22:29.329 ]' 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.329 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.589 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:29.589 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:30.159 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.421 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.993 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.993 { 00:22:30.993 "cntlid": 133, 00:22:30.993 "qid": 0, 00:22:30.993 "state": "enabled", 00:22:30.993 "thread": "nvmf_tgt_poll_group_000", 00:22:30.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:30.993 "listen_address": { 00:22:30.993 "trtype": "TCP", 00:22:30.993 "adrfam": "IPv4", 00:22:30.993 "traddr": "10.0.0.2", 00:22:30.993 "trsvcid": "4420" 00:22:30.993 }, 00:22:30.993 "peer_address": { 00:22:30.993 "trtype": "TCP", 00:22:30.993 "adrfam": "IPv4", 00:22:30.993 "traddr": "10.0.0.1", 00:22:30.993 "trsvcid": "40666" 00:22:30.993 }, 00:22:30.993 "auth": { 00:22:30.993 "state": "completed", 00:22:30.993 "digest": "sha512", 00:22:30.993 "dhgroup": "ffdhe6144" 00:22:30.993 } 00:22:30.993 } 00:22:30.993 ]' 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.993 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.255 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:31.255 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.255 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.255 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.255 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.255 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret 
DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:31.255 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:32.208 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.208 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:32.208 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.208 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.208 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.208 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.208 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.208 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:32.208 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.473 00:22:32.473 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.473 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.473 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.734 { 00:22:32.734 "cntlid": 135, 00:22:32.734 "qid": 0, 00:22:32.734 "state": "enabled", 00:22:32.734 "thread": "nvmf_tgt_poll_group_000", 00:22:32.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:32.734 "listen_address": { 00:22:32.734 "trtype": "TCP", 00:22:32.734 "adrfam": "IPv4", 00:22:32.734 "traddr": "10.0.0.2", 00:22:32.734 "trsvcid": "4420" 00:22:32.734 }, 00:22:32.734 "peer_address": { 00:22:32.734 "trtype": "TCP", 00:22:32.734 "adrfam": "IPv4", 00:22:32.734 "traddr": "10.0.0.1", 00:22:32.734 "trsvcid": "40708" 00:22:32.734 }, 00:22:32.734 "auth": { 00:22:32.734 "state": "completed", 00:22:32.734 "digest": "sha512", 00:22:32.734 "dhgroup": "ffdhe6144" 00:22:32.734 } 00:22:32.734 } 00:22:32.734 ]' 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.734 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.994 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:32.994 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.994 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.994 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.994 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.994 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:32.994 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.935 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.507 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.507 { 00:22:34.507 "cntlid": 137, 00:22:34.507 "qid": 0, 00:22:34.507 "state": "enabled", 00:22:34.507 "thread": "nvmf_tgt_poll_group_000", 00:22:34.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:34.507 "listen_address": { 00:22:34.507 "trtype": "TCP", 00:22:34.507 "adrfam": "IPv4", 00:22:34.507 "traddr": "10.0.0.2", 00:22:34.507 "trsvcid": "4420" 00:22:34.507 }, 00:22:34.507 "peer_address": { 00:22:34.507 "trtype": "TCP", 00:22:34.507 "adrfam": "IPv4", 00:22:34.507 "traddr": "10.0.0.1", 00:22:34.507 "trsvcid": "40742" 00:22:34.507 }, 00:22:34.507 "auth": { 00:22:34.507 "state": "completed", 00:22:34.507 "digest": "sha512", 00:22:34.507 "dhgroup": "ffdhe8192" 00:22:34.507 } 00:22:34.507 } 00:22:34.507 ]' 00:22:34.507 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.767 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.767 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.767 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:34.767 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.767 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.767 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.767 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.028 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:35.028 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:35.600 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.600 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:35.600 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.600 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.600 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.600 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.600 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:35.600 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.861 22:51:02 
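Besides the SPDK host stack, each key is also exercised through the kernel initiator with nvme-cli. The secrets are passed in the DH-HMAC-CHAP textual representation DHHC-1:<hh>:<base64>:, where the <hh> field identifies the hash used to transform the stored secret (00 = untransformed, 01/02/03 = SHA-256/384/512). A sketch with placeholder secrets; the real DHHC-1 strings are generated per run, and $hostnqn/$hostid stand in for the uuid values in the log:

# kernel path: connect with host secret and controller (bidirectional) secret
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:<base64 host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64 ctrl key>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0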
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.861 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.433 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.433 { 00:22:36.433 "cntlid": 139, 00:22:36.433 "qid": 0, 00:22:36.433 "state": "enabled", 00:22:36.433 "thread": "nvmf_tgt_poll_group_000", 00:22:36.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:36.433 "listen_address": { 00:22:36.433 "trtype": "TCP", 00:22:36.433 "adrfam": "IPv4", 00:22:36.433 "traddr": "10.0.0.2", 00:22:36.433 "trsvcid": "4420" 00:22:36.433 }, 00:22:36.433 "peer_address": { 00:22:36.433 "trtype": "TCP", 00:22:36.433 "adrfam": "IPv4", 00:22:36.433 "traddr": "10.0.0.1", 00:22:36.433 "trsvcid": "40764" 00:22:36.433 }, 00:22:36.433 "auth": { 00:22:36.433 "state": "completed", 00:22:36.433 "digest": "sha512", 00:22:36.433 "dhgroup": "ffdhe8192" 00:22:36.433 } 00:22:36.433 } 00:22:36.433 ]' 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.433 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.694 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:36.694 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.694 22:51:03 
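After each attach the negotiated parameters are read back rather than assumed: nvmf_subsystem_get_qpairs reports an auth object per queue pair, and jq asserts the digest, DH group, and final state. A sketch of the same check, assuming the target's default RPC socket:

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]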
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.694 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.694 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.694 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:36.694 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: --dhchap-ctrl-secret DHHC-1:02:NmJmNTE3ZTM2MmNjNzIzOWRlNzY1ZDliMWJlMzI0NGM3NmZlMjk0Y2I0NTZjZDg03Mjx8A==: 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.634 22:51:04 
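Each iteration tears down both paths before the next key is tried, so no stale controller or host entry can mask a failed negotiation in the following combination:

# drop the SPDK-host controller, the kernel connection, and the host entry
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"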
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.634 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.204 00:22:38.204 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.204 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.204 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.204 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.205 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.205 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.205 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.465 { 00:22:38.465 "cntlid": 141, 00:22:38.465 "qid": 0, 00:22:38.465 "state": "enabled", 00:22:38.465 "thread": "nvmf_tgt_poll_group_000", 00:22:38.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:38.465 "listen_address": { 00:22:38.465 "trtype": "TCP", 00:22:38.465 "adrfam": "IPv4", 00:22:38.465 "traddr": "10.0.0.2", 00:22:38.465 "trsvcid": "4420" 00:22:38.465 }, 00:22:38.465 "peer_address": { 00:22:38.465 "trtype": "TCP", 00:22:38.465 "adrfam": "IPv4", 00:22:38.465 "traddr": "10.0.0.1", 00:22:38.465 "trsvcid": "40792" 00:22:38.465 }, 00:22:38.465 "auth": { 00:22:38.465 "state": "completed", 00:22:38.465 "digest": "sha512", 00:22:38.465 "dhgroup": "ffdhe8192" 00:22:38.465 } 00:22:38.465 } 00:22:38.465 ]' 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.465 22:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.465 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.725 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:38.725 22:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:01:MjE1MGM0YjViZTI3NTMxZjRjODQ4ZTFjMjI3NmI1ZGF4vj9I: 00:22:39.296 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.296 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:39.296 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.296 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.296 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.296 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.296 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.296 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.557 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:39.557 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.558 22:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.558 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.129 00:22:40.129 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.129 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.129 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.129 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.129 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.129 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.129 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.129 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.129 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.129 { 00:22:40.129 "cntlid": 143, 00:22:40.129 "qid": 0, 00:22:40.129 "state": "enabled", 00:22:40.129 "thread": "nvmf_tgt_poll_group_000", 00:22:40.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:40.129 "listen_address": { 00:22:40.129 "trtype": "TCP", 00:22:40.129 "adrfam": "IPv4", 00:22:40.129 "traddr": "10.0.0.2", 00:22:40.129 "trsvcid": "4420" 00:22:40.129 }, 00:22:40.129 "peer_address": { 00:22:40.129 "trtype": "TCP", 00:22:40.129 "adrfam": "IPv4", 00:22:40.129 "traddr": "10.0.0.1", 00:22:40.129 "trsvcid": "40828" 00:22:40.129 }, 00:22:40.129 "auth": { 00:22:40.129 "state": "completed", 00:22:40.129 "digest": "sha512", 00:22:40.129 "dhgroup": "ffdhe8192" 00:22:40.129 } 00:22:40.129 } 00:22:40.129 ]' 00:22:40.129 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.129 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.129 
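key3 is the unidirectional case: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion drops --dhchap-ctrlr-key entirely and only the host authenticates itself; the controller is not challenged in return. A sketch of how that expansion behaves, with $keyid standing in for the loop variable:

# expands to nothing when no controller key is defined for this key id
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$keyid" "${ckey[@]}"   # key3: host-authenticated only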
22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.389 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:40.389 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.389 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.389 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.389 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.650 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:40.650 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:40.910 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.170 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.170 22:51:08 
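Before the failure-path tests, the host is re-allowed every digest and DH group, so the subsequent rejections can only come from key mismatches, not from an algorithm the host refuses to negotiate:

scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192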
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.170 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.740 00:22:41.740 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.740 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.740 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.000 { 00:22:42.000 "cntlid": 145, 00:22:42.000 "qid": 0, 00:22:42.000 "state": "enabled", 00:22:42.000 "thread": "nvmf_tgt_poll_group_000", 00:22:42.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:42.000 "listen_address": { 00:22:42.000 "trtype": "TCP", 00:22:42.000 "adrfam": "IPv4", 00:22:42.000 "traddr": "10.0.0.2", 00:22:42.000 "trsvcid": "4420" 00:22:42.000 }, 00:22:42.000 "peer_address": { 00:22:42.000 
"trtype": "TCP", 00:22:42.000 "adrfam": "IPv4", 00:22:42.000 "traddr": "10.0.0.1", 00:22:42.000 "trsvcid": "40844" 00:22:42.000 }, 00:22:42.000 "auth": { 00:22:42.000 "state": "completed", 00:22:42.000 "digest": "sha512", 00:22:42.000 "dhgroup": "ffdhe8192" 00:22:42.000 } 00:22:42.000 } 00:22:42.000 ]' 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.000 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.001 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.261 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:42.261 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGYyMDZmODI4MTQyNmEzNmI2ZGU5NTMxMDgwNjU5NTQ5OTc4Y2ZhOGI2ODdhMDM4UxsI1w==: --dhchap-ctrl-secret DHHC-1:03:OWIxOGE0YjQ3ZjhkODgzOGE3NzdkYzZkNGFhZGQ0NmU4NWRkNWM1ZTIxODU1YzIxZWI3ZGRiY2U1NmQwNDFiMtpsDW8=: 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:42.831 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:42.832 22:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:43.401 request: 00:22:43.401 { 00:22:43.401 "name": "nvme0", 00:22:43.401 "trtype": "tcp", 00:22:43.401 "traddr": "10.0.0.2", 00:22:43.401 "adrfam": "ipv4", 00:22:43.401 "trsvcid": "4420", 00:22:43.401 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:43.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:43.401 "prchk_reftag": false, 00:22:43.401 "prchk_guard": false, 00:22:43.401 "hdgst": false, 00:22:43.401 "ddgst": false, 00:22:43.401 "dhchap_key": "key2", 00:22:43.401 "allow_unrecognized_csi": false, 00:22:43.401 "method": "bdev_nvme_attach_controller", 00:22:43.401 "req_id": 1 00:22:43.401 } 00:22:43.401 Got JSON-RPC error response 00:22:43.401 response: 00:22:43.401 { 00:22:43.401 "code": -5, 00:22:43.401 "message": "Input/output error" 00:22:43.402 } 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.402 22:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.402 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.972 request: 00:22:43.972 { 00:22:43.972 "name": "nvme0", 00:22:43.972 "trtype": "tcp", 00:22:43.972 "traddr": "10.0.0.2", 00:22:43.972 "adrfam": "ipv4", 00:22:43.972 "trsvcid": "4420", 00:22:43.972 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:43.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:43.972 "prchk_reftag": false, 00:22:43.972 "prchk_guard": false, 00:22:43.972 "hdgst": false, 00:22:43.972 "ddgst": false, 00:22:43.972 "dhchap_key": "key1", 00:22:43.972 "dhchap_ctrlr_key": "ckey2", 00:22:43.972 "allow_unrecognized_csi": false, 00:22:43.972 "method": "bdev_nvme_attach_controller", 00:22:43.972 "req_id": 1 00:22:43.972 } 00:22:43.972 Got JSON-RPC error response 00:22:43.972 response: 00:22:43.972 { 00:22:43.972 "code": -5, 00:22:43.972 "message": "Input/output error" 00:22:43.972 } 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:43.972 22:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.972 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.232 request: 00:22:44.232 { 00:22:44.232 "name": "nvme0", 00:22:44.232 "trtype": "tcp", 00:22:44.232 "traddr": "10.0.0.2", 00:22:44.232 "adrfam": "ipv4", 00:22:44.232 "trsvcid": "4420", 00:22:44.232 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:44.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:44.232 "prchk_reftag": false, 00:22:44.232 "prchk_guard": false, 00:22:44.232 "hdgst": false, 00:22:44.232 "ddgst": false, 00:22:44.232 "dhchap_key": "key1", 00:22:44.232 "dhchap_ctrlr_key": "ckey1", 00:22:44.232 "allow_unrecognized_csi": false, 00:22:44.233 "method": "bdev_nvme_attach_controller", 00:22:44.233 "req_id": 1 00:22:44.233 } 00:22:44.233 Got JSON-RPC error response 00:22:44.233 response: 00:22:44.233 { 00:22:44.233 "code": -5, 00:22:44.233 "message": "Input/output error" 00:22:44.233 } 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 682673 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 682673 ']' 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 682673 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.233 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 682673 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 682673' 00:22:44.494 killing process with pid 682673 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 682673 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 682673 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=709038 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 709038 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 709038 ']' 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.494 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 709038 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 709038 ']' 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
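At this point the freshly restarted target is listening on /var/tmp/spdk.sock, and the trace below reloads the generated DH-CHAP key files into the target keyring and re-authorizes the host NQN. Condensed, that target-side sequence amounts to the following minimal sketch, assembled only from the same rpc.py calls that appear in this trace (the /tmp/spdk.key-* paths are temp files this particular run generated, so treat them as run-specific, not fixed names):

    # load generated DH-CHAP key files into the target keyring
    # (file names below are this run's temp files — run-specific)
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.tLg
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3
    # authorize the host NQN on the subsystem with a host key
    # and an optional controller (bidirectional) key
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0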
00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.435 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.696 null0 00:22:45.696 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.696 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:45.696 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tLg 00:22:45.696 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.696 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.696 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.1Q3 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Q3 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xik 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.9Hu ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9Hu 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:45.697 22:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oPo 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Dq1 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dq1 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Qm6 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
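The matching host-side attach goes through the second RPC server listening on /var/tmp/host.sock, which is what the expanded command in the next trace entry shows. As a standalone sketch, using the same flags seen in this trace (only the bdev name nvme0 and the key label key3 are per-test choices):

    # attach a controller over TCP, authenticating with DH-CHAP key3
    # from the host-side keyring
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    # verify the attach the same way the test does:
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name'    # expect: nvme0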
00:22:45.697 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.639 nvme0n1 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.639 { 00:22:46.639 "cntlid": 1, 00:22:46.639 "qid": 0, 00:22:46.639 "state": "enabled", 00:22:46.639 "thread": "nvmf_tgt_poll_group_000", 00:22:46.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:46.639 "listen_address": { 00:22:46.639 "trtype": "TCP", 00:22:46.639 "adrfam": "IPv4", 00:22:46.639 "traddr": "10.0.0.2", 00:22:46.639 "trsvcid": "4420" 00:22:46.639 }, 00:22:46.639 "peer_address": { 00:22:46.639 "trtype": "TCP", 00:22:46.639 "adrfam": "IPv4", 00:22:46.639 "traddr": "10.0.0.1", 00:22:46.639 "trsvcid": "43656" 00:22:46.639 }, 00:22:46.639 "auth": { 00:22:46.639 "state": "completed", 00:22:46.639 "digest": "sha512", 00:22:46.639 "dhgroup": "ffdhe8192" 00:22:46.639 } 00:22:46.639 } 00:22:46.639 ]' 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.639 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.899 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:46.899 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.899 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.899 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.899 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.900 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:46.900 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:47.840 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:47.841 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:48.102 request: 00:22:48.102 { 00:22:48.102 "name": "nvme0", 00:22:48.102 "trtype": "tcp", 00:22:48.102 "traddr": "10.0.0.2", 00:22:48.102 "adrfam": "ipv4", 00:22:48.102 "trsvcid": "4420", 00:22:48.102 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:48.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:48.102 "prchk_reftag": false, 00:22:48.102 "prchk_guard": false, 00:22:48.102 "hdgst": false, 00:22:48.102 "ddgst": false, 00:22:48.102 "dhchap_key": "key3", 00:22:48.102 "allow_unrecognized_csi": false, 00:22:48.102 "method": "bdev_nvme_attach_controller", 00:22:48.102 "req_id": 1 00:22:48.102 } 00:22:48.102 Got JSON-RPC error response 00:22:48.102 response: 00:22:48.102 { 00:22:48.102 "code": -5, 00:22:48.102 "message": "Input/output error" 00:22:48.102 } 00:22:48.102 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:48.102 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.102 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.102 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.102 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:48.102 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:48.102 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:48.102 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:48.363 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:48.363 request: 00:22:48.363 { 00:22:48.363 "name": "nvme0", 00:22:48.363 "trtype": "tcp", 00:22:48.363 "traddr": "10.0.0.2", 00:22:48.363 "adrfam": "ipv4", 00:22:48.363 "trsvcid": "4420", 00:22:48.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:48.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:48.363 "prchk_reftag": false, 00:22:48.363 "prchk_guard": false, 00:22:48.363 "hdgst": false, 00:22:48.363 "ddgst": false, 00:22:48.363 "dhchap_key": "key3", 00:22:48.363 "allow_unrecognized_csi": false, 00:22:48.363 "method": "bdev_nvme_attach_controller", 00:22:48.363 "req_id": 1 00:22:48.364 } 00:22:48.364 Got JSON-RPC error response 00:22:48.364 response: 00:22:48.364 { 00:22:48.364 "code": -5, 00:22:48.364 "message": "Input/output error" 00:22:48.364 } 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:48.364 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.624 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.885 request: 00:22:48.885 { 00:22:48.885 "name": "nvme0", 00:22:48.885 "trtype": "tcp", 00:22:48.885 "traddr": "10.0.0.2", 00:22:48.885 "adrfam": "ipv4", 00:22:48.885 "trsvcid": "4420", 00:22:48.885 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:48.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:48.885 "prchk_reftag": false, 00:22:48.885 "prchk_guard": false, 00:22:48.885 "hdgst": false, 00:22:48.885 "ddgst": false, 00:22:48.885 "dhchap_key": "key0", 00:22:48.885 "dhchap_ctrlr_key": "key1", 00:22:48.885 "allow_unrecognized_csi": false, 00:22:48.885 "method": "bdev_nvme_attach_controller", 00:22:48.885 "req_id": 1 00:22:48.885 } 00:22:48.885 Got JSON-RPC error response 00:22:48.885 response: 00:22:48.885 { 00:22:48.885 "code": -5, 00:22:48.885 "message": "Input/output error" 00:22:48.885 } 00:22:48.885 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:48.885 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.885 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.885 22:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.885 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:48.885 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:48.885 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:49.145 nvme0n1 00:22:49.145 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:49.145 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.145 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:49.406 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.406 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.406 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.667 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:49.667 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.667 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.667 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.667 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:49.667 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:49.667 22:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:50.239 nvme0n1 00:22:50.239 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:50.239 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:50.239 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.500 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.500 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:50.500 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.500 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.500 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.500 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:50.500 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:50.500 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.761 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.761 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:50.761 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: --dhchap-ctrl-secret DHHC-1:03:ZTQ0YWY1YWJiOWZhOGVjZWE3ZDY1MmQ3ZDQ0OTYwMDg2Nzg1Yzc0NzFmOGRiNmE5ZWIyM2NlNThmOTY4NjA5OK52e1Y=: 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.331 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:51.591 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:51.851 request: 00:22:51.851 { 00:22:51.851 "name": "nvme0", 00:22:51.851 "trtype": "tcp", 00:22:51.851 "traddr": "10.0.0.2", 00:22:51.851 "adrfam": "ipv4", 00:22:51.851 "trsvcid": "4420", 00:22:51.851 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:51.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:51.851 "prchk_reftag": false, 00:22:51.851 "prchk_guard": false, 00:22:51.851 "hdgst": false, 00:22:51.851 "ddgst": false, 00:22:51.851 "dhchap_key": "key1", 00:22:51.851 "allow_unrecognized_csi": false, 00:22:51.851 "method": "bdev_nvme_attach_controller", 00:22:51.851 "req_id": 1 00:22:51.851 } 00:22:51.851 Got JSON-RPC error response 00:22:51.851 response: 00:22:51.851 { 00:22:51.851 "code": -5, 00:22:51.851 "message": "Input/output error" 00:22:51.851 } 00:22:52.110 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:52.110 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:52.110 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:52.110 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:52.110 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.110 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.110 22:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.680 nvme0n1 00:22:52.680 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:52.680 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:52.680 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.941 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.941 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.941 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.203 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.203 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.203 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.203 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.203 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:53.203 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:53.203 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:53.203 nvme0n1 00:22:53.463 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:53.463 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:53.463 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.463 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.463 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.463 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: '' 2s 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: ]] 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTdhNWU1ZWJlYWRkZjI3ZDM5NzViMzI2NTEyN2IzODOWqQTV: 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:53.723 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:53.724 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: 2s 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: ]] 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmI4YzNiMDliYWQ1NmYwOTUxNGY1YzcwOTk1Y2JkYThmNDBhN2YwMjAxZTFiZGY3YCwLVQ==: 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:55.634 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.253 22:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.516 nvme0n1 00:22:58.516 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:58.516 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.516 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.516 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.516 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:58.516 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:59.086 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:59.086 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:59.086 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:59.345 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:59.604 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:00.175 request: 00:23:00.175 { 00:23:00.175 "name": "nvme0", 00:23:00.175 "dhchap_key": "key1", 00:23:00.175 "dhchap_ctrlr_key": "key3", 00:23:00.175 "method": "bdev_nvme_set_keys", 00:23:00.175 "req_id": 1 00:23:00.175 } 00:23:00.175 Got JSON-RPC error response 00:23:00.175 response: 00:23:00.175 { 00:23:00.175 "code": -13, 00:23:00.175 "message": "Permission denied" 00:23:00.175 } 00:23:00.175 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:00.175 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:00.175 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:00.175 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:00.175 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:00.175 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:00.175 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.175 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:00.175 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:01.557 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:02.127 nvme0n1 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
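[editor's note] The trace up to this point exercises SPDK's DHCHAP re-key flow: the target narrows the allowed key pair with nvmf_subsystem_set_keys, the host re-authenticates the live controller with bdev_nvme_set_keys, and a rotation to a pair the target no longer permits is expected to fail. A minimal sketch of that sequence, assuming rpc.py from the checkout is on PATH, the target answers on its default RPC socket, and the host-side application answers on /var/tmp/host.sock as in this run; the error check at the end is ours, not the suite's:

TGT_RPC="rpc.py"                         # target app (default /var/tmp/spdk.sock)
HOST_RPC="rpc.py -s /var/tmp/host.sock"  # initiator-side app, as in the hostrpc helper above
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# 1) Target: restrict this host to a new key pair.
$TGT_RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

# 2) Host: re-authenticate the live controller with the matching pair.
$HOST_RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# 3) Rotating to a pair the target no longer allows must fail with
#    JSON-RPC error -13 "Permission denied", exactly as logged above.
if $HOST_RPC bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "rotation unexpectedly succeeded" >&2; exit 1
fi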
00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:02.128 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:02.698 request: 00:23:02.698 { 00:23:02.698 "name": "nvme0", 00:23:02.698 "dhchap_key": "key2", 00:23:02.698 "dhchap_ctrlr_key": "key0", 00:23:02.698 "method": "bdev_nvme_set_keys", 00:23:02.698 "req_id": 1 00:23:02.698 } 00:23:02.698 Got JSON-RPC error response 00:23:02.698 response: 00:23:02.698 { 00:23:02.698 "code": -13, 00:23:02.698 "message": "Permission denied" 00:23:02.698 } 00:23:02.698 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:02.698 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:02.698 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:02.698 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:02.698 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:02.698 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:02.698 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.959 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:02.959 22:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:03.900 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:03.900 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:03.900 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.161 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:04.161 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:04.161 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:04.161 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 683012 00:23:04.161 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 683012 ']' 00:23:04.161 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 683012 00:23:04.161 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:04.161 22:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.161 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 683012 00:23:04.162 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:04.162 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:04.162 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 683012' 00:23:04.162 killing process with pid 683012 00:23:04.162 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 683012 00:23:04.162 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 683012 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.423 rmmod nvme_tcp 00:23:04.423 rmmod nvme_fabrics 00:23:04.423 rmmod nvme_keyring 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 709038 ']' 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 709038 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 709038 ']' 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 709038 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 709038 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:04.423 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:04.424 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 709038' 00:23:04.424 killing process with pid 709038 00:23:04.424 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 709038 00:23:04.424 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 709038 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.684 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.tLg /tmp/spdk.key-sha256.xik /tmp/spdk.key-sha384.oPo /tmp/spdk.key-sha512.Qm6 /tmp/spdk.key-sha512.1Q3 /tmp/spdk.key-sha384.9Hu /tmp/spdk.key-sha256.Dq1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:06.594 00:23:06.594 real 2m36.656s 00:23:06.594 user 5m52.191s 00:23:06.594 sys 0m24.632s 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.594 ************************************ 00:23:06.594 END TEST nvmf_auth_target 00:23:06.594 ************************************ 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:06.594 22:51:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:06.855 ************************************ 00:23:06.855 START TEST nvmf_bdevio_no_huge 00:23:06.855 ************************************ 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:06.855 * Looking for test storage... 
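[editor's note] The cleanup just logged (killprocess, nvmftestfini, the iptr helper, remove_spdk_ns, and the key-file removal) condenses to the steps below. The commands mirror the trace; the loop and error guards are illustrative additions, and the exact key-file names (tLg, xik, ...) are generated per run, so a glob stands in for them here:

kill "$app_pid" && wait "$app_pid"                    # e.g. PIDs 683012 / 709038 above
sync
for mod in nvme-tcp nvme-fabrics; do                  # unloads nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r "$mod" || true
done
iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only SPDK-tagged firewall rules
ip -4 addr flush cvl_0_1                              # release the initiator-side address
rm -f /tmp/spdk.key-*                                 # discard the generated DHCHAP keys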
00:23:06.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.855 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:06.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.856 --rc genhtml_branch_coverage=1 00:23:06.856 --rc genhtml_function_coverage=1 00:23:06.856 --rc genhtml_legend=1 00:23:06.856 --rc geninfo_all_blocks=1 00:23:06.856 --rc geninfo_unexecuted_blocks=1 00:23:06.856 00:23:06.856 ' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:06.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.856 --rc genhtml_branch_coverage=1 00:23:06.856 --rc genhtml_function_coverage=1 00:23:06.856 --rc genhtml_legend=1 00:23:06.856 --rc geninfo_all_blocks=1 00:23:06.856 --rc geninfo_unexecuted_blocks=1 00:23:06.856 00:23:06.856 ' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:06.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.856 --rc genhtml_branch_coverage=1 00:23:06.856 --rc genhtml_function_coverage=1 00:23:06.856 --rc genhtml_legend=1 00:23:06.856 --rc geninfo_all_blocks=1 00:23:06.856 --rc geninfo_unexecuted_blocks=1 00:23:06.856 00:23:06.856 ' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:06.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.856 --rc genhtml_branch_coverage=1 00:23:06.856 --rc genhtml_function_coverage=1 00:23:06.856 --rc genhtml_legend=1 00:23:06.856 --rc geninfo_all_blocks=1 00:23:06.856 --rc geninfo_unexecuted_blocks=1 00:23:06.856 00:23:06.856 ' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:06.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.856 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.857 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:06.857 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:06.857 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.857 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.995 
22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:14.995 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
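[editor's note] The NIC probe running here classifies PCI functions by vendor:device ID (e810/x722/mlx tables) and then resolves each supported function to its kernel net interface through sysfs, keeping only links that are up. A standalone sketch of that lookup; the 0000:31:00.x addresses are this host's two E810 ports, while the loop body is ours:

for pci in 0000:31:00.0 0000:31:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        dev=${netdir##*/}                              # interface name, e.g. cvl_0_0
        [[ $(cat "$netdir/operstate") == up ]] || continue
        echo "Found net devices under $pci: $dev"
    done
done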
00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:14.995 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:14.995 Found net devices under 0000:31:00.0: cvl_0_0 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.995 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:14.996 Found net devices under 0000:31:00.1: cvl_0_1 00:23:14.996 
22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:23:14.996 00:23:14.996 --- 10.0.0.2 ping statistics --- 00:23:14.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.996 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:23:14.996 00:23:14.996 --- 10.0.0.1 ping statistics --- 00:23:14.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.996 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=717486 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 717486 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 717486 ']' 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.996 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.996 [2024-09-30 22:51:41.667773] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:23:14.996 [2024-09-30 22:51:41.667842] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:14.996 [2024-09-30 22:51:41.765571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.996 [2024-09-30 22:51:41.874807] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.996 [2024-09-30 22:51:41.874860] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.996 [2024-09-30 22:51:41.874869] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.996 [2024-09-30 22:51:41.874877] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.996 [2024-09-30 22:51:41.874883] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.996 [2024-09-30 22:51:41.875045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:14.996 [2024-09-30 22:51:41.875336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:23:14.996 [2024-09-30 22:51:41.875494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:23:14.996 [2024-09-30 22:51:41.875495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.568 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.569 [2024-09-30 22:51:42.551599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.569 Malloc0 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.569 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.830 [2024-09-30 22:51:42.605424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:15.830 { 00:23:15.830 "params": { 00:23:15.830 "name": "Nvme$subsystem", 00:23:15.830 "trtype": "$TEST_TRANSPORT", 00:23:15.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.830 "adrfam": "ipv4", 00:23:15.830 "trsvcid": "$NVMF_PORT", 00:23:15.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.830 "hdgst": ${hdgst:-false}, 00:23:15.830 "ddgst": ${ddgst:-false} 00:23:15.830 }, 00:23:15.830 "method": "bdev_nvme_attach_controller" 00:23:15.830 } 00:23:15.830 EOF 00:23:15.830 )") 00:23:15.830 22:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:23:15.830 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:15.830 "params": { 00:23:15.830 "name": "Nvme1", 00:23:15.830 "trtype": "tcp", 00:23:15.830 "traddr": "10.0.0.2", 00:23:15.830 "adrfam": "ipv4", 00:23:15.830 "trsvcid": "4420", 00:23:15.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.830 "hdgst": false, 00:23:15.830 "ddgst": false 00:23:15.830 }, 00:23:15.830 "method": "bdev_nvme_attach_controller" 00:23:15.830 }' 00:23:15.830 [2024-09-30 22:51:42.663713] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:23:15.830 [2024-09-30 22:51:42.663782] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid717541 ] 00:23:15.830 [2024-09-30 22:51:42.751299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:16.091 [2024-09-30 22:51:42.859315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.091 [2024-09-30 22:51:42.859479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.091 [2024-09-30 22:51:42.859479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.352 I/O targets: 00:23:16.352 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:16.352 00:23:16.352 00:23:16.352 CUnit - A unit testing framework for C - Version 2.1-3 00:23:16.352 http://cunit.sourceforge.net/ 00:23:16.352 00:23:16.352 00:23:16.352 Suite: bdevio tests on: Nvme1n1 00:23:16.352 Test: blockdev write read block ...passed 00:23:16.352 Test: blockdev write zeroes read block ...passed 00:23:16.352 Test: blockdev write zeroes read no split ...passed 00:23:16.352 Test: blockdev write zeroes read split ...passed 00:23:16.352 Test: blockdev write zeroes read split partial ...passed 00:23:16.352 Test: blockdev reset ...[2024-09-30 22:51:43.344413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:16.352 [2024-09-30 22:51:43.344513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a4250 (9): Bad file descriptor 00:23:16.613 [2024-09-30 22:51:43.415413] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:16.613 passed 00:23:16.613 Test: blockdev write read 8 blocks ...passed 00:23:16.613 Test: blockdev write read size > 128k ...passed 00:23:16.613 Test: blockdev write read invalid size ...passed 00:23:16.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:16.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:16.613 Test: blockdev write read max offset ...passed 00:23:16.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:16.613 Test: blockdev writev readv 8 blocks ...passed 00:23:16.613 Test: blockdev writev readv 30 x 1block ...passed 00:23:16.613 Test: blockdev writev readv block ...passed 00:23:16.613 Test: blockdev writev readv size > 128k ...passed 00:23:16.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:16.613 Test: blockdev comparev and writev ...[2024-09-30 22:51:43.600144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:16.613 [2024-09-30 22:51:43.600194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.613 [2024-09-30 22:51:43.600212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:16.613 [2024-09-30 22:51:43.600220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.613 [2024-09-30 22:51:43.600766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:16.614 [2024-09-30 22:51:43.600778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:16.614 [2024-09-30 22:51:43.600799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:16.614 [2024-09-30 22:51:43.600807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:16.614 [2024-09-30 22:51:43.601389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:16.614 [2024-09-30 22:51:43.601401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:16.614 [2024-09-30 22:51:43.601416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:16.614 [2024-09-30 22:51:43.601424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:16.614 [2024-09-30 22:51:43.601988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:16.614 [2024-09-30 22:51:43.601999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:16.614 [2024-09-30 22:51:43.602014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:16.614 [2024-09-30 22:51:43.602022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:16.875 passed 00:23:16.875 Test: blockdev nvme passthru rw ...passed 00:23:16.875 Test: blockdev nvme passthru vendor specific ...[2024-09-30 22:51:43.686871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:16.875 [2024-09-30 22:51:43.686887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:16.875 [2024-09-30 22:51:43.687285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:16.875 [2024-09-30 22:51:43.687296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:16.875 [2024-09-30 22:51:43.687690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:16.875 [2024-09-30 22:51:43.687701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:16.875 [2024-09-30 22:51:43.688120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:16.875 [2024-09-30 22:51:43.688131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:16.875 passed 00:23:16.875 Test: blockdev nvme admin passthru ...passed 00:23:16.875 Test: blockdev copy ...passed 00:23:16.875 00:23:16.875 Run Summary: Type Total Ran Passed Failed Inactive 00:23:16.875 suites 1 1 n/a 0 0 00:23:16.875 tests 23 23 23 0 0 00:23:16.875 asserts 152 152 152 0 n/a 00:23:16.875 00:23:16.875 Elapsed time = 1.175 seconds 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.136 rmmod nvme_tcp 00:23:17.136 rmmod nvme_fabrics 00:23:17.136 rmmod nvme_keyring 00:23:17.136 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 717486 ']' 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 717486 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 717486 ']' 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 717486 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 717486 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 717486' 00:23:17.397 killing process with pid 717486 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 717486 00:23:17.397 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 717486 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.658 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:20.202 00:23:20.202 real 0m13.030s 00:23:20.202 user 0m15.095s 00:23:20.202 sys 0m7.046s 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.202 ************************************ 00:23:20.202 END TEST nvmf_bdevio_no_huge 00:23:20.202 ************************************ 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:20.202 ************************************ 00:23:20.202 START TEST nvmf_tls 00:23:20.202 ************************************ 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:20.202 * Looking for test storage... 00:23:20.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:20.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.202 --rc genhtml_branch_coverage=1 00:23:20.202 --rc genhtml_function_coverage=1 00:23:20.202 --rc genhtml_legend=1 00:23:20.202 --rc geninfo_all_blocks=1 00:23:20.202 --rc geninfo_unexecuted_blocks=1 00:23:20.202 00:23:20.202 ' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:20.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.202 --rc genhtml_branch_coverage=1 00:23:20.202 --rc genhtml_function_coverage=1 00:23:20.202 --rc genhtml_legend=1 00:23:20.202 --rc geninfo_all_blocks=1 00:23:20.202 --rc geninfo_unexecuted_blocks=1 00:23:20.202 00:23:20.202 ' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:20.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.202 --rc genhtml_branch_coverage=1 00:23:20.202 --rc genhtml_function_coverage=1 00:23:20.202 --rc genhtml_legend=1 00:23:20.202 --rc geninfo_all_blocks=1 00:23:20.202 --rc geninfo_unexecuted_blocks=1 00:23:20.202 00:23:20.202 ' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:20.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.202 --rc genhtml_branch_coverage=1 00:23:20.202 --rc genhtml_function_coverage=1 00:23:20.202 --rc genhtml_legend=1 00:23:20.202 --rc geninfo_all_blocks=1 00:23:20.202 --rc geninfo_unexecuted_blocks=1 00:23:20.202 00:23:20.202 ' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
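The lt/cmp_versions probe traced above decides whether the installed lcov predates 2.x by splitting the dotted version strings on IFS=.-: and comparing the resulting fields numerically, with missing fields treated as zero. A minimal sketch of that idea (a hypothetical helper, not the harness's own cmp_versions):

    version_lt() {                       # return 0 when $1 sorts before $2
        local -a v1 v2
        IFS='.-:' read -r -a v1 <<< "$1"
        IFS='.-:' read -r -a v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                         # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"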
00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.202 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:20.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.203 22:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:28.343 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:28.343 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:28.343 Found net devices under 0000:31:00.0: cvl_0_0 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:28.343 Found net devices under 0000:31:00.1: cvl_0_1 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.343 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
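Device discovery above is driven by a PCI allowlist: the harness collects the E810/X722/mlx vendor:device IDs it supports, then walks sysfs to map each matching function to its kernel net interface, which is what yields cvl_0_0 and cvl_0_1 here. A condensed sketch of that sysfs walk, assuming a pre-seeded ID list in place of SPDK's pci_bus_cache:

    # Map supported NICs to kernel net interfaces via sysfs.
    supported=("0x8086:0x159b" "0x8086:0x1592")   # E810 IDs from the allowlist above
    for pci in /sys/bus/pci/devices/*; do
        id="$(<"$pci/vendor"):$(<"$pci/device")"
        [[ " ${supported[*]} " == *" $id "* ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

With both ports of the same E810 found, the firewall and namespace commands that follow split them between the root namespace (initiator, 10.0.0.1) and cvl_0_0_ns_spdk (target, 10.0.0.2) so the test traffic really crosses the wire.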
00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:23:28.344 00:23:28.344 --- 10.0.0.2 ping statistics --- 00:23:28.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.344 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:23:28.344 00:23:28.344 --- 10.0.0.1 ping statistics --- 00:23:28.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.344 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=722247 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 722247 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 722247 ']' 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.344 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.344 [2024-09-30 22:51:54.750587] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:23:28.344 [2024-09-30 22:51:54.750655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.344 [2024-09-30 22:51:54.842789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.344 [2024-09-30 22:51:54.935592] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.344 [2024-09-30 22:51:54.935648] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.344 [2024-09-30 22:51:54.935657] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.344 [2024-09-30 22:51:54.935664] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.344 [2024-09-30 22:51:54.935670] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.344 [2024-09-30 22:51:54.935695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.605 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.605 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:28.605 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:28.605 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.605 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.866 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.866 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:28.866 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:28.866 true 00:23:28.866 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:28.866 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:29.127 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:29.127 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:29.127 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:29.388 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:29.388 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:29.388 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:29.388 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:29.388 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:29.648 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:29.648 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:29.909 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:29.909 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:29.909 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:29.909 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:29.909 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:29.909 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:29.909 22:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:30.170 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:30.170 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:30.431 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:30.431 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:30.431 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:30.431 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:30.431 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:30.691 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:30.691 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:30.691 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:30.691 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:30.691 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:30.691 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:30.691 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.BOSberIpCK 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.EalY2zLeIu 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BOSberIpCK 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.EalY2zLeIu 00:23:30.692 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:30.952 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:31.213 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.BOSberIpCK 00:23:31.213 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BOSberIpCK 00:23:31.213 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.473 [2024-09-30 22:51:58.237205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.473 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.473 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:31.732 [2024-09-30 22:51:58.570009] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.733 [2024-09-30 22:51:58.570207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.733 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.733 malloc0 00:23:31.993 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:31.993 22:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BOSberIpCK 00:23:32.253 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.253 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BOSberIpCK 00:23:44.491 Initializing NVMe Controllers 00:23:44.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:44.491 Initialization complete. Launching workers. 00:23:44.492 ======================================================== 00:23:44.492 Latency(us) 00:23:44.492 Device Information : IOPS MiB/s Average min max 00:23:44.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18616.03 72.72 3438.10 1178.55 4188.73 00:23:44.492 ======================================================== 00:23:44.492 Total : 18616.03 72.72 3438.10 1178.55 4188.73 00:23:44.492 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BOSberIpCK 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BOSberIpCK 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=725101 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 725101 /var/tmp/bdevperf.sock 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 725101 ']' 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
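The interchange keys minted above follow the NVMe/TCP TLS PSK interchange format: the configured secret, suffixed with its CRC32, base64-encoded, and wrapped in an envelope whose middle field names the retained-PSK hash (01 = SHA-256, 02 = SHA-384). A minimal Python sketch of what the traced format_key helper appears to compute; the little-endian CRC byte order is an assumption, and note that the harness passes the ASCII hex string itself as the secret bytes:

    import base64
    import zlib

    def format_interchange_psk(secret: str, hash_id: int) -> str:
        # Secret bytes (here the literal ASCII hex string, matching the
        # traced key= values), plus a 4-byte CRC32 assumed little-endian,
        # base64-encoded and wrapped as NVMeTLSkey-1:<hash>:<b64>:
        data = secret.encode("ascii")
        crc = zlib.crc32(data).to_bytes(4, "little")
        b64 = base64.b64encode(data + crc).decode("ascii")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

    # format_interchange_psk("00112233445566778899aabbccddeeff", 1) should
    # yield a key shaped like the one written to /tmp/tmp.BOSberIpCK above,
    # and hash_id=2 gives the NVMeTLSkey-1:02: form used later in this run.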
00:23:44.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.492 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.492 [2024-09-30 22:52:09.407195] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:23:44.492 [2024-09-30 22:52:09.407255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725101 ] 00:23:44.492 [2024-09-30 22:52:09.483502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.492 [2024-09-30 22:52:09.546878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.492 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.492 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:44.492 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BOSberIpCK 00:23:44.492 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.492 [2024-09-30 22:52:10.498122] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.492 TLSTESTn1 00:23:44.492 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:44.492 Running I/O for 10 seconds... 
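The bdevperf leg traced above drives everything through the tool's private RPC socket: first the PSK file is registered as a keyring entry, then the controller is attached by key name rather than by file path. The same two calls, sketched against SPDK's in-tree Python JSON-RPC client (the spdk.rpc.client import path is an assumption about how the python/ package is exposed on sys.path):

    from spdk.rpc.client import JSONRPCClient

    client = JSONRPCClient("/var/tmp/bdevperf.sock")  # bdevperf's -r socket

    # 1) Make the PSK file available under the keyring name "key0".
    client.call("keyring_file_add_key",
                {"name": "key0", "path": "/tmp/tmp.BOSberIpCK"})

    # 2) Attach over TLS by referencing the key name; the params mirror the
    #    bdev_nvme_attach_controller request dumps later in this log.
    client.call("bdev_nvme_attach_controller", {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "key0",
    })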
00:23:54.113 4932.00 IOPS, 19.27 MiB/s 4793.50 IOPS, 18.72 MiB/s 4747.33 IOPS, 18.54 MiB/s 4966.00 IOPS, 19.40 MiB/s 5191.60 IOPS, 20.28 MiB/s 5414.67 IOPS, 21.15 MiB/s 5400.57 IOPS, 21.10 MiB/s 5268.88 IOPS, 20.58 MiB/s 5261.22 IOPS, 20.55 MiB/s 5342.60 IOPS, 20.87 MiB/s 00:23:54.113 Latency(us) 00:23:54.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.113 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.113 Verification LBA range: start 0x0 length 0x2000 00:23:54.113 TLSTESTn1 : 10.02 5342.68 20.87 0.00 0.00 23916.72 5734.40 64662.19 00:23:54.113 =================================================================================================================== 00:23:54.113 Total : 5342.68 20.87 0.00 0.00 23916.72 5734.40 64662.19 00:23:54.113 { 00:23:54.113 "results": [ 00:23:54.113 { 00:23:54.113 "job": "TLSTESTn1", 00:23:54.114 "core_mask": "0x4", 00:23:54.114 "workload": "verify", 00:23:54.114 "status": "finished", 00:23:54.114 "verify_range": { 00:23:54.114 "start": 0, 00:23:54.114 "length": 8192 00:23:54.114 }, 00:23:54.114 "queue_depth": 128, 00:23:54.114 "io_size": 4096, 00:23:54.114 "runtime": 10.023626, 00:23:54.114 "iops": 5342.677390397447, 00:23:54.114 "mibps": 20.869833556240028, 00:23:54.114 "io_failed": 0, 00:23:54.114 "io_timeout": 0, 00:23:54.114 "avg_latency_us": 23916.72473400183, 00:23:54.114 "min_latency_us": 5734.4, 00:23:54.114 "max_latency_us": 64662.18666666667 00:23:54.114 } 00:23:54.114 ], 00:23:54.114 "core_count": 1 00:23:54.114 } 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 725101 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 725101 ']' 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 725101 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 725101 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 725101' 00:23:54.114 killing process with pid 725101 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 725101 00:23:54.114 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.114 00:23:54.114 Latency(us) 00:23:54.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.114 =================================================================================================================== 00:23:54.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 725101 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.EalY2zLeIu 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EalY2zLeIu 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EalY2zLeIu 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EalY2zLeIu 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=727349 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 727349 /var/tmp/bdevperf.sock 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 727349 ']' 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.114 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.114 [2024-09-30 22:52:20.982463] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:23:54.114 [2024-09-30 22:52:20.982519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727349 ] 00:23:54.114 [2024-09-30 22:52:21.058625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.114 [2024-09-30 22:52:21.110022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.056 22:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.056 22:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.056 22:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EalY2zLeIu 00:23:55.056 22:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.317 [2024-09-30 22:52:22.124777] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.317 [2024-09-30 22:52:22.129358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:55.317 [2024-09-30 22:52:22.129987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8fc00 (107): Transport endpoint is not connected 00:23:55.317 [2024-09-30 22:52:22.130982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8fc00 (9): Bad file descriptor 00:23:55.317 [2024-09-30 22:52:22.131984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:55.317 [2024-09-30 22:52:22.131991] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:55.317 [2024-09-30 22:52:22.131997] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:55.317 [2024-09-30 22:52:22.132005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
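The NOT run_bdevperf case at target/tls.sh@147 hands the initiator the second key (/tmp/tmp.EalY2zLeIu) while the target still holds the first one, so the TLS handshake never completes: the initiator only observes errno 107 on the socket and the attach RPC surfaces -5, which is exactly what the wrapper asserts. Expressed against the same assumed Python client, the expected-failure check looks roughly like:

    from spdk.rpc.client import JSONRPCClient, JSONRPCException

    client = JSONRPCClient("/var/tmp/bdevperf.sock")
    client.call("keyring_file_add_key",
                {"name": "key0", "path": "/tmp/tmp.EalY2zLeIu"})  # mismatched key

    try:
        client.call("bdev_nvme_attach_controller", {
            "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "key0",
        })
        raise AssertionError("attach unexpectedly succeeded with the wrong PSK")
    except JSONRPCException:
        pass  # expected: the -5 "Input/output error" response dumped below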
00:23:55.317 request: 00:23:55.317 { 00:23:55.317 "name": "TLSTEST", 00:23:55.317 "trtype": "tcp", 00:23:55.317 "traddr": "10.0.0.2", 00:23:55.317 "adrfam": "ipv4", 00:23:55.317 "trsvcid": "4420", 00:23:55.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.317 "prchk_reftag": false, 00:23:55.317 "prchk_guard": false, 00:23:55.317 "hdgst": false, 00:23:55.317 "ddgst": false, 00:23:55.317 "psk": "key0", 00:23:55.317 "allow_unrecognized_csi": false, 00:23:55.317 "method": "bdev_nvme_attach_controller", 00:23:55.317 "req_id": 1 00:23:55.317 } 00:23:55.317 Got JSON-RPC error response 00:23:55.317 response: 00:23:55.317 { 00:23:55.317 "code": -5, 00:23:55.317 "message": "Input/output error" 00:23:55.317 } 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 727349 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 727349 ']' 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 727349 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 727349 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 727349' 00:23:55.317 killing process with pid 727349 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 727349 00:23:55.317 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.317 00:23:55.317 Latency(us) 00:23:55.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.317 =================================================================================================================== 00:23:55.317 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 727349 00:23:55.317 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BOSberIpCK 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BOSberIpCK 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BOSberIpCK 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BOSberIpCK 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=727689 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 727689 /var/tmp/bdevperf.sock 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 727689 ']' 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.579 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.579 [2024-09-30 22:52:22.393210] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:23:55.579 [2024-09-30 22:52:22.393262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727689 ] 00:23:55.579 [2024-09-30 22:52:22.471490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.579 [2024-09-30 22:52:22.521719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.523 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.523 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:56.523 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BOSberIpCK 00:23:56.523 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:56.523 [2024-09-30 22:52:23.528308] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.523 [2024-09-30 22:52:23.532808] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-09-30 22:52:23.532826] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-09-30 22:52:23.532845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:56.523 [2024-09-30 22:52:23.533501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eec00 (107): Transport endpoint is not connected 00:23:56.523 [2024-09-30 22:52:23.534496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eec00 (9): Bad file descriptor 00:23:56.523 [2024-09-30 22:52:23.535498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:56.523 [2024-09-30 22:52:23.535505] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:56.523 [2024-09-30 22:52:23.535511] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:56.523 [2024-09-30 22:52:23.535518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
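The target/tls.sh@150 case fails one step earlier than the wrong-key case: during the handshake the target reconstructs a PSK identity from what the initiator offers and looks it up among the hosts registered via nvmf_subsystem_add_host, and only host1 was registered against cnode1. A small sketch of that lookup; the identity layout is copied verbatim from the errors above, where the NVMe0R01 prefix appears to encode the protocol version and retained-hash id:

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # Identity string format as printed by tcp_sock_get_key above.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    # Server-side registry as populated by nvmf_subsystem_add_host --psk key0:
    server_psks = {
        tls_psk_identity("nqn.2016-06.io.spdk:host1",
                         "nqn.2016-06.io.spdk:cnode1"): "key0",
    }

    offered = tls_psk_identity("nqn.2016-06.io.spdk:host2",
                               "nqn.2016-06.io.spdk:cnode1")
    assert offered not in server_psks  # hence "Unable to find PSK for identity"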
00:23:56.523 request: 00:23:56.523 { 00:23:56.523 "name": "TLSTEST", 00:23:56.523 "trtype": "tcp", 00:23:56.523 "traddr": "10.0.0.2", 00:23:56.523 "adrfam": "ipv4", 00:23:56.523 "trsvcid": "4420", 00:23:56.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.523 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:56.523 "prchk_reftag": false, 00:23:56.523 "prchk_guard": false, 00:23:56.523 "hdgst": false, 00:23:56.523 "ddgst": false, 00:23:56.523 "psk": "key0", 00:23:56.523 "allow_unrecognized_csi": false, 00:23:56.523 "method": "bdev_nvme_attach_controller", 00:23:56.523 "req_id": 1 00:23:56.523 } 00:23:56.523 Got JSON-RPC error response 00:23:56.523 response: 00:23:56.523 { 00:23:56.523 "code": -5, 00:23:56.523 "message": "Input/output error" 00:23:56.523 } 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 727689 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 727689 ']' 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 727689 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 727689 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 727689' 00:23:56.784 killing process with pid 727689 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 727689 00:23:56.784 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.784 00:23:56.784 Latency(us) 00:23:56.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.784 =================================================================================================================== 00:23:56.784 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 727689 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BOSberIpCK 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BOSberIpCK 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BOSberIpCK 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BOSberIpCK 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=728033 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 728033 /var/tmp/bdevperf.sock 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 728033 ']' 00:23:56.784 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.785 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.785 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.785 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.785 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.045 [2024-09-30 22:52:23.807017] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:23:57.045 [2024-09-30 22:52:23.807077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728033 ]
00:23:57.045 [2024-09-30 22:52:23.882685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:57.045 [2024-09-30 22:52:23.934189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:23:57.617 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:57.617 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:23:57.617 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BOSberIpCK
00:23:57.915 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:58.198 [2024-09-30 22:52:24.912572] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:58.198 [2024-09-30 22:52:24.916970] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:23:58.198 [2024-09-30 22:52:24.916989] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:23:58.198 [2024-09-30 22:52:24.917014] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:23:58.198 [2024-09-30 22:52:24.917669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208bc00 (107): Transport endpoint is not connected
00:23:58.198 [2024-09-30 22:52:24.918664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208bc00 (9): Bad file descriptor
00:23:58.198 [2024-09-30 22:52:24.919666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:58.198 [2024-09-30 22:52:24.919673] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:23:58.198 [2024-09-30 22:52:24.919679] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted
00:23:58.198 [2024-09-30 22:52:24.919687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:58.198 request: 00:23:58.198 { 00:23:58.198 "name": "TLSTEST", 00:23:58.198 "trtype": "tcp", 00:23:58.198 "traddr": "10.0.0.2", 00:23:58.198 "adrfam": "ipv4", 00:23:58.198 "trsvcid": "4420", 00:23:58.198 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.198 "prchk_reftag": false, 00:23:58.198 "prchk_guard": false, 00:23:58.198 "hdgst": false, 00:23:58.198 "ddgst": false, 00:23:58.198 "psk": "key0", 00:23:58.198 "allow_unrecognized_csi": false, 00:23:58.198 "method": "bdev_nvme_attach_controller", 00:23:58.198 "req_id": 1 00:23:58.198 } 00:23:58.198 Got JSON-RPC error response 00:23:58.198 response: 00:23:58.198 { 00:23:58.198 "code": -5, 00:23:58.198 "message": "Input/output error" 00:23:58.198 } 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 728033 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 728033 ']' 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 728033 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 728033 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 728033' 00:23:58.198 killing process with pid 728033 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 728033 00:23:58.198 Received shutdown signal, test time was about 10.000000 seconds 00:23:58.198 00:23:58.198 Latency(us) 00:23:58.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.198 =================================================================================================================== 00:23:58.198 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:58.198 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 728033 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=728232 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 728232 /var/tmp/bdevperf.sock 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 728232 ']' 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.198 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.198 [2024-09-30 22:52:25.162199] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:23:58.198 [2024-09-30 22:52:25.162257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728232 ] 00:23:58.505 [2024-09-30 22:52:25.238171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.505 [2024-09-30 22:52:25.290021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.075 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.075 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:59.075 22:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:59.336 [2024-09-30 22:52:26.099863] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:59.336 [2024-09-30 22:52:26.099887] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:59.336 request: 00:23:59.336 { 00:23:59.336 "name": "key0", 00:23:59.336 "path": "", 00:23:59.336 "method": "keyring_file_add_key", 00:23:59.336 "req_id": 1 00:23:59.336 } 00:23:59.336 Got JSON-RPC error response 00:23:59.336 response: 00:23:59.336 { 00:23:59.336 "code": -1, 00:23:59.336 "message": "Operation not permitted" 00:23:59.336 } 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:59.336 [2024-09-30 22:52:26.268360] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.336 [2024-09-30 22:52:26.268380] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:59.336 request: 00:23:59.336 { 00:23:59.336 "name": "TLSTEST", 00:23:59.336 "trtype": "tcp", 00:23:59.336 "traddr": "10.0.0.2", 00:23:59.336 "adrfam": "ipv4", 00:23:59.336 "trsvcid": "4420", 00:23:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.336 "prchk_reftag": false, 00:23:59.336 "prchk_guard": false, 00:23:59.336 "hdgst": false, 00:23:59.336 "ddgst": false, 00:23:59.336 "psk": "key0", 00:23:59.336 "allow_unrecognized_csi": false, 00:23:59.336 "method": "bdev_nvme_attach_controller", 00:23:59.336 "req_id": 1 00:23:59.336 } 00:23:59.336 Got JSON-RPC error response 00:23:59.336 response: 00:23:59.336 { 00:23:59.336 "code": -126, 00:23:59.336 "message": "Required key not available" 00:23:59.336 } 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 728232 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 728232 ']' 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 728232 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 728232 
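The target/tls.sh@156 case with an empty key path never reaches the network at all: keyring_file_add_key rejects anything that is not an absolute path (the -1 "Operation not permitted" above), and the attach that follows fails with -126 "Required key not available" because no key named key0 was ever loaded. A client-side guard, using the same assumed Python client and a hypothetical helper name, would catch this before any RPC round-trip:

    import os
    from spdk.rpc.client import JSONRPCClient

    def add_psk_file(client: JSONRPCClient, name: str, path: str) -> None:
        # Mirror of the keyring's own check: non-absolute paths, the empty
        # string included, are rejected with "Operation not permitted".
        if not os.path.isabs(path):
            raise ValueError(f"PSK path must be absolute, got {path!r}")
        client.call("keyring_file_add_key", {"name": name, "path": path})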
00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 728232' 00:23:59.336 killing process with pid 728232 00:23:59.336 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 728232 00:23:59.336 Received shutdown signal, test time was about 10.000000 seconds 00:23:59.336 00:23:59.336 Latency(us) 00:23:59.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.336 =================================================================================================================== 00:23:59.337 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:59.337 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 728232 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 722247 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 722247 ']' 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 722247 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 722247 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 722247' 00:23:59.596 killing process with pid 722247 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 722247 00:23:59.596 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 722247 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 
00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.sD2gyi45yI 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.sD2gyi45yI 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=728543 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 728543 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 728543 ']' 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:59.857 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.857 [2024-09-30 22:52:26.756001] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:23:59.857 [2024-09-30 22:52:26.756063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.857 [2024-09-30 22:52:26.842113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.117 [2024-09-30 22:52:26.898987] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.117 [2024-09-30 22:52:26.899021] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:00.117 [2024-09-30 22:52:26.899026] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.117 [2024-09-30 22:52:26.899031] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.117 [2024-09-30 22:52:26.899035] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.117 [2024-09-30 22:52:26.899055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.sD2gyi45yI 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sD2gyi45yI 00:24:00.688 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:00.948 [2024-09-30 22:52:27.750825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.948 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:00.948 22:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:01.208 [2024-09-30 22:52:28.087649] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.208 [2024-09-30 22:52:28.087856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.208 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:01.469 malloc0 00:24:01.469 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:01.469 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:01.730 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sD2gyi45yI 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sD2gyi45yI 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=729058 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 729058 /var/tmp/bdevperf.sock 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 729058 ']' 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.991 22:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.991 [2024-09-30 22:52:28.820754] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:24:01.991 [2024-09-30 22:52:28.820809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729058 ] 00:24:01.991 [2024-09-30 22:52:28.895903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.991 [2024-09-30 22:52:28.947898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.932 22:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.932 22:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:02.933 22:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:02.933 22:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.933 [2024-09-30 22:52:29.922349] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.193 TLSTESTn1 00:24:03.193 22:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:03.193 Running I/O for 10 seconds... 00:24:13.177 6095.00 IOPS, 23.81 MiB/s 6121.00 IOPS, 23.91 MiB/s 6175.00 IOPS, 24.12 MiB/s 6291.75 IOPS, 24.58 MiB/s 6156.80 IOPS, 24.05 MiB/s 6187.83 IOPS, 24.17 MiB/s 6150.29 IOPS, 24.02 MiB/s 6058.38 IOPS, 23.67 MiB/s 6055.78 IOPS, 23.66 MiB/s 6087.60 IOPS, 23.78 MiB/s 00:24:13.177 Latency(us) 00:24:13.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.177 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:13.177 Verification LBA range: start 0x0 length 0x2000 00:24:13.177 TLSTESTn1 : 10.01 6093.10 23.80 0.00 0.00 20977.33 4696.75 22282.24 00:24:13.177 =================================================================================================================== 00:24:13.177 Total : 6093.10 23.80 0.00 0.00 20977.33 4696.75 22282.24 00:24:13.177 { 00:24:13.177 "results": [ 00:24:13.177 { 00:24:13.177 "job": "TLSTESTn1", 00:24:13.177 "core_mask": "0x4", 00:24:13.177 "workload": "verify", 00:24:13.177 "status": "finished", 00:24:13.177 "verify_range": { 00:24:13.177 "start": 0, 00:24:13.177 "length": 8192 00:24:13.177 }, 00:24:13.177 "queue_depth": 128, 00:24:13.177 "io_size": 4096, 00:24:13.177 "runtime": 10.011659, 00:24:13.177 "iops": 6093.0960593044565, 00:24:13.177 "mibps": 23.801156481658033, 00:24:13.177 "io_failed": 0, 00:24:13.177 "io_timeout": 0, 00:24:13.177 "avg_latency_us": 20977.33165295127, 00:24:13.177 "min_latency_us": 4696.746666666667, 00:24:13.177 "max_latency_us": 22282.24 00:24:13.177 } 00:24:13.177 ], 00:24:13.177 "core_count": 1 00:24:13.177 } 00:24:13.177 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.177 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 729058 00:24:13.177 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 729058 ']' 00:24:13.177 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 729058 00:24:13.177 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:13.177 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:13.177 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 729058 00:24:13.437 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 729058' 00:24:13.438 killing process with pid 729058 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 729058 00:24:13.438 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.438 00:24:13.438 Latency(us) 00:24:13.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.438 =================================================================================================================== 00:24:13.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 729058 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.sD2gyi45yI 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sD2gyi45yI 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sD2gyi45yI 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sD2gyi45yI 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sD2gyi45yI 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=731148 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 731148 /var/tmp/bdevperf.sock 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 731148 ']' 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.438 22:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.438 [2024-09-30 22:52:40.402443] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:13.438 [2024-09-30 22:52:40.402503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731148 ] 00:24:13.698 [2024-09-30 22:52:40.478597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.698 [2024-09-30 22:52:40.530054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.268 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.268 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:14.268 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:14.529 [2024-09-30 22:52:41.356147] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.sD2gyi45yI': 0100666 00:24:14.529 [2024-09-30 22:52:41.356173] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:14.529 request: 00:24:14.529 { 00:24:14.529 "name": "key0", 00:24:14.529 "path": "/tmp/tmp.sD2gyi45yI", 00:24:14.529 "method": "keyring_file_add_key", 00:24:14.529 "req_id": 1 00:24:14.529 } 00:24:14.529 Got JSON-RPC error response 00:24:14.529 response: 00:24:14.529 { 00:24:14.529 "code": -1, 00:24:14.529 "message": "Operation not permitted" 00:24:14.529 } 00:24:14.529 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:14.529 [2024-09-30 22:52:41.532662] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.529 [2024-09-30 22:52:41.532680] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 
00:24:14.529 request: 00:24:14.529 { 00:24:14.529 "name": "TLSTEST", 00:24:14.529 "trtype": "tcp", 00:24:14.529 "traddr": "10.0.0.2", 00:24:14.529 "adrfam": "ipv4", 00:24:14.529 "trsvcid": "4420", 00:24:14.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.529 "prchk_reftag": false, 00:24:14.529 "prchk_guard": false, 00:24:14.529 "hdgst": false, 00:24:14.529 "ddgst": false, 00:24:14.529 "psk": "key0", 00:24:14.529 "allow_unrecognized_csi": false, 00:24:14.529 "method": "bdev_nvme_attach_controller", 00:24:14.529 "req_id": 1 00:24:14.529 } 00:24:14.529 Got JSON-RPC error response 00:24:14.529 response: 00:24:14.529 { 00:24:14.529 "code": -126, 00:24:14.529 "message": "Required key not available" 00:24:14.529 } 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 731148 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 731148 ']' 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 731148 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 731148 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 731148' 00:24:14.791 killing process with pid 731148 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 731148 00:24:14.791 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.791 00:24:14.791 Latency(us) 00:24:14.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.791 =================================================================================================================== 00:24:14.791 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 731148 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 728543 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 728543 ']' 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 728543 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 728543 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:14.791 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 728543' 00:24:14.792 killing process with pid 728543 00:24:14.792 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 728543 00:24:14.792 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 728543 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=731476 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 731476 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 731476 ']' 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.052 22:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.052 [2024-09-30 22:52:41.986763] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:15.052 [2024-09-30 22:52:41.986817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.313 [2024-09-30 22:52:42.070922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.313 [2024-09-30 22:52:42.125074] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.313 [2024-09-30 22:52:42.125111] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.313 [2024-09-30 22:52:42.125117] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.313 [2024-09-30 22:52:42.125121] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
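
The failure just traced is intentional. tls.sh chmods the PSK file to 0666 and wraps the re-run in NOT, asserting that SPDK's file-based keyring refuses a key readable by group or other: keyring_file_check_path rejects mode 0100666, keyring_file_add_key returns -1 ("Operation not permitted"), and the subsequent attach fails with -126 ("Required key not available") because key0 was never loaded. A sketch of the negative path, assuming the same socket and key file as above:

  chmod 0666 /tmp/tmp.sD2gyi45yI
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI
  # -> JSON-RPC error -1: Operation not permitted
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # -> JSON-RPC error -126: Required key not available (Could not load PSK: key0)

The killprocess calls doing the cleanup first check the pid's comm name with ps --no-headers -o comm= (reactor_2 for a bdevperf core, reactor_1 for the target) and special-case sudo before issuing the kill.
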
00:24:15.313 [2024-09-30 22:52:42.125126] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.313 [2024-09-30 22:52:42.125142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.sD2gyi45yI 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.sD2gyi45yI 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.sD2gyi45yI 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sD2gyi45yI 00:24:15.883 22:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.143 [2024-09-30 22:52:42.984270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.143 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:16.404 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:16.404 [2024-09-30 22:52:43.345156] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.404 [2024-09-30 22:52:43.345342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.404 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:16.664 malloc0 00:24:16.664 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:16.925 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:16.925 [2024-09-30 22:52:43.892154] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.sD2gyi45yI': 0100666 00:24:16.925 [2024-09-30 22:52:43.892174] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:16.925 request: 00:24:16.925 { 00:24:16.925 "name": "key0", 00:24:16.925 "path": "/tmp/tmp.sD2gyi45yI", 00:24:16.925 "method": "keyring_file_add_key", 00:24:16.925 "req_id": 1 00:24:16.925 } 00:24:16.925 Got JSON-RPC error response 00:24:16.925 response: 00:24:16.925 { 00:24:16.925 "code": -1, 00:24:16.925 "message": "Operation not permitted" 00:24:16.925 } 00:24:16.925 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.187 [2024-09-30 22:52:44.068612] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:17.187 [2024-09-30 22:52:44.068638] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:17.187 request: 00:24:17.187 { 00:24:17.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.187 "host": "nqn.2016-06.io.spdk:host1", 00:24:17.187 "psk": "key0", 00:24:17.187 "method": "nvmf_subsystem_add_host", 00:24:17.187 "req_id": 1 00:24:17.187 } 00:24:17.187 Got JSON-RPC error response 00:24:17.187 response: 00:24:17.187 { 00:24:17.187 "code": -32603, 00:24:17.187 "message": "Internal error" 00:24:17.187 } 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 731476 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 731476 ']' 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 731476 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 731476 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 731476' 00:24:17.187 killing process with pid 731476 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 731476 00:24:17.187 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 731476 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.sD2gyi45yI 00:24:17.448 22:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=732060 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 732060 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 732060 ']' 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.448 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.448 [2024-09-30 22:52:44.351645] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:17.448 [2024-09-30 22:52:44.351698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.448 [2024-09-30 22:52:44.435670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.709 [2024-09-30 22:52:44.494656] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.709 [2024-09-30 22:52:44.494695] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.709 [2024-09-30 22:52:44.494701] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.709 [2024-09-30 22:52:44.494706] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.709 [2024-09-30 22:52:44.494710] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
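
The same loosened key was then offered to the target side (the NOT setup_nvmf_tgt block above): the target's keyring_file_add_key failed with the identical permission error, so nvmf_subsystem_add_host could only report -32603 ("Internal error"), key0 never having come into existence there. With the file restored to 0600, the target restarting here gets the full positive TLS configuration; the setup_nvmf_tgt trace that follows reduces to roughly this RPC sequence (rpc.py shorthand for the full scripts/rpc.py path in the trace):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # -k asks for a TLS listener ("secure_channel": true in the saved config further below)
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI    # succeeds now that the file is 0600
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
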
00:24:17.709 [2024-09-30 22:52:44.494734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.sD2gyi45yI 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sD2gyi45yI 00:24:18.281 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:18.541 [2024-09-30 22:52:45.351283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.541 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:18.541 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:18.803 [2024-09-30 22:52:45.688110] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:18.803 [2024-09-30 22:52:45.688316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.803 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:19.063 malloc0 00:24:19.063 22:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:19.063 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:19.323 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=732535 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 732535 /var/tmp/bdevperf.sock 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 732535 ']' 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.584 22:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.584 [2024-09-30 22:52:46.411392] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:19.584 [2024-09-30 22:52:46.411447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732535 ] 00:24:19.584 [2024-09-30 22:52:46.488341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.584 [2024-09-30 22:52:46.540137] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.525 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:20.525 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:20.525 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:20.525 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:20.525 [2024-09-30 22:52:47.502496] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.786 TLSTESTn1 00:24:20.786 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:21.047 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:21.047 "subsystems": [ 00:24:21.047 { 00:24:21.047 "subsystem": "keyring", 00:24:21.047 "config": [ 00:24:21.047 { 00:24:21.047 "method": "keyring_file_add_key", 00:24:21.047 "params": { 00:24:21.047 "name": "key0", 00:24:21.047 "path": "/tmp/tmp.sD2gyi45yI" 00:24:21.047 } 00:24:21.047 } 00:24:21.047 ] 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "subsystem": "iobuf", 00:24:21.047 "config": [ 00:24:21.047 { 00:24:21.047 "method": "iobuf_set_options", 00:24:21.047 "params": { 00:24:21.047 "small_pool_count": 8192, 00:24:21.047 "large_pool_count": 1024, 00:24:21.047 "small_bufsize": 8192, 00:24:21.047 "large_bufsize": 135168 00:24:21.047 } 00:24:21.047 } 00:24:21.047 ] 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "subsystem": "sock", 00:24:21.047 "config": [ 00:24:21.047 { 00:24:21.047 "method": "sock_set_default_impl", 00:24:21.047 "params": { 00:24:21.047 "impl_name": "posix" 00:24:21.047 } 00:24:21.047 }, 
00:24:21.047 { 00:24:21.047 "method": "sock_impl_set_options", 00:24:21.047 "params": { 00:24:21.047 "impl_name": "ssl", 00:24:21.047 "recv_buf_size": 4096, 00:24:21.047 "send_buf_size": 4096, 00:24:21.047 "enable_recv_pipe": true, 00:24:21.047 "enable_quickack": false, 00:24:21.047 "enable_placement_id": 0, 00:24:21.047 "enable_zerocopy_send_server": true, 00:24:21.047 "enable_zerocopy_send_client": false, 00:24:21.047 "zerocopy_threshold": 0, 00:24:21.047 "tls_version": 0, 00:24:21.047 "enable_ktls": false 00:24:21.047 } 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "method": "sock_impl_set_options", 00:24:21.047 "params": { 00:24:21.047 "impl_name": "posix", 00:24:21.047 "recv_buf_size": 2097152, 00:24:21.047 "send_buf_size": 2097152, 00:24:21.047 "enable_recv_pipe": true, 00:24:21.047 "enable_quickack": false, 00:24:21.047 "enable_placement_id": 0, 00:24:21.047 "enable_zerocopy_send_server": true, 00:24:21.047 "enable_zerocopy_send_client": false, 00:24:21.047 "zerocopy_threshold": 0, 00:24:21.047 "tls_version": 0, 00:24:21.047 "enable_ktls": false 00:24:21.047 } 00:24:21.047 } 00:24:21.047 ] 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "subsystem": "vmd", 00:24:21.047 "config": [] 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "subsystem": "accel", 00:24:21.047 "config": [ 00:24:21.047 { 00:24:21.047 "method": "accel_set_options", 00:24:21.047 "params": { 00:24:21.047 "small_cache_size": 128, 00:24:21.047 "large_cache_size": 16, 00:24:21.047 "task_count": 2048, 00:24:21.047 "sequence_count": 2048, 00:24:21.047 "buf_count": 2048 00:24:21.047 } 00:24:21.047 } 00:24:21.047 ] 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "subsystem": "bdev", 00:24:21.047 "config": [ 00:24:21.047 { 00:24:21.047 "method": "bdev_set_options", 00:24:21.047 "params": { 00:24:21.047 "bdev_io_pool_size": 65535, 00:24:21.047 "bdev_io_cache_size": 256, 00:24:21.047 "bdev_auto_examine": true, 00:24:21.047 "iobuf_small_cache_size": 128, 00:24:21.047 "iobuf_large_cache_size": 16 00:24:21.047 } 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "method": "bdev_raid_set_options", 00:24:21.047 "params": { 00:24:21.047 "process_window_size_kb": 1024, 00:24:21.047 "process_max_bandwidth_mb_sec": 0 00:24:21.047 } 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "method": "bdev_iscsi_set_options", 00:24:21.047 "params": { 00:24:21.047 "timeout_sec": 30 00:24:21.047 } 00:24:21.047 }, 00:24:21.047 { 00:24:21.047 "method": "bdev_nvme_set_options", 00:24:21.047 "params": { 00:24:21.047 "action_on_timeout": "none", 00:24:21.047 "timeout_us": 0, 00:24:21.047 "timeout_admin_us": 0, 00:24:21.047 "keep_alive_timeout_ms": 10000, 00:24:21.047 "arbitration_burst": 0, 00:24:21.048 "low_priority_weight": 0, 00:24:21.048 "medium_priority_weight": 0, 00:24:21.048 "high_priority_weight": 0, 00:24:21.048 "nvme_adminq_poll_period_us": 10000, 00:24:21.048 "nvme_ioq_poll_period_us": 0, 00:24:21.048 "io_queue_requests": 0, 00:24:21.048 "delay_cmd_submit": true, 00:24:21.048 "transport_retry_count": 4, 00:24:21.048 "bdev_retry_count": 3, 00:24:21.048 "transport_ack_timeout": 0, 00:24:21.048 "ctrlr_loss_timeout_sec": 0, 00:24:21.048 "reconnect_delay_sec": 0, 00:24:21.048 "fast_io_fail_timeout_sec": 0, 00:24:21.048 "disable_auto_failback": false, 00:24:21.048 "generate_uuids": false, 00:24:21.048 "transport_tos": 0, 00:24:21.048 "nvme_error_stat": false, 00:24:21.048 "rdma_srq_size": 0, 00:24:21.048 "io_path_stat": false, 00:24:21.048 "allow_accel_sequence": false, 00:24:21.048 "rdma_max_cq_size": 0, 00:24:21.048 "rdma_cm_event_timeout_ms": 0, 00:24:21.048 
"dhchap_digests": [ 00:24:21.048 "sha256", 00:24:21.048 "sha384", 00:24:21.048 "sha512" 00:24:21.048 ], 00:24:21.048 "dhchap_dhgroups": [ 00:24:21.048 "null", 00:24:21.048 "ffdhe2048", 00:24:21.048 "ffdhe3072", 00:24:21.048 "ffdhe4096", 00:24:21.048 "ffdhe6144", 00:24:21.048 "ffdhe8192" 00:24:21.048 ] 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "bdev_nvme_set_hotplug", 00:24:21.048 "params": { 00:24:21.048 "period_us": 100000, 00:24:21.048 "enable": false 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "bdev_malloc_create", 00:24:21.048 "params": { 00:24:21.048 "name": "malloc0", 00:24:21.048 "num_blocks": 8192, 00:24:21.048 "block_size": 4096, 00:24:21.048 "physical_block_size": 4096, 00:24:21.048 "uuid": "ab52aa1e-f956-468b-8cbc-51f28bfe7b7d", 00:24:21.048 "optimal_io_boundary": 0, 00:24:21.048 "md_size": 0, 00:24:21.048 "dif_type": 0, 00:24:21.048 "dif_is_head_of_md": false, 00:24:21.048 "dif_pi_format": 0 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "bdev_wait_for_examine" 00:24:21.048 } 00:24:21.048 ] 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "subsystem": "nbd", 00:24:21.048 "config": [] 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "subsystem": "scheduler", 00:24:21.048 "config": [ 00:24:21.048 { 00:24:21.048 "method": "framework_set_scheduler", 00:24:21.048 "params": { 00:24:21.048 "name": "static" 00:24:21.048 } 00:24:21.048 } 00:24:21.048 ] 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "subsystem": "nvmf", 00:24:21.048 "config": [ 00:24:21.048 { 00:24:21.048 "method": "nvmf_set_config", 00:24:21.048 "params": { 00:24:21.048 "discovery_filter": "match_any", 00:24:21.048 "admin_cmd_passthru": { 00:24:21.048 "identify_ctrlr": false 00:24:21.048 }, 00:24:21.048 "dhchap_digests": [ 00:24:21.048 "sha256", 00:24:21.048 "sha384", 00:24:21.048 "sha512" 00:24:21.048 ], 00:24:21.048 "dhchap_dhgroups": [ 00:24:21.048 "null", 00:24:21.048 "ffdhe2048", 00:24:21.048 "ffdhe3072", 00:24:21.048 "ffdhe4096", 00:24:21.048 "ffdhe6144", 00:24:21.048 "ffdhe8192" 00:24:21.048 ] 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "nvmf_set_max_subsystems", 00:24:21.048 "params": { 00:24:21.048 "max_subsystems": 1024 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "nvmf_set_crdt", 00:24:21.048 "params": { 00:24:21.048 "crdt1": 0, 00:24:21.048 "crdt2": 0, 00:24:21.048 "crdt3": 0 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "nvmf_create_transport", 00:24:21.048 "params": { 00:24:21.048 "trtype": "TCP", 00:24:21.048 "max_queue_depth": 128, 00:24:21.048 "max_io_qpairs_per_ctrlr": 127, 00:24:21.048 "in_capsule_data_size": 4096, 00:24:21.048 "max_io_size": 131072, 00:24:21.048 "io_unit_size": 131072, 00:24:21.048 "max_aq_depth": 128, 00:24:21.048 "num_shared_buffers": 511, 00:24:21.048 "buf_cache_size": 4294967295, 00:24:21.048 "dif_insert_or_strip": false, 00:24:21.048 "zcopy": false, 00:24:21.048 "c2h_success": false, 00:24:21.048 "sock_priority": 0, 00:24:21.048 "abort_timeout_sec": 1, 00:24:21.048 "ack_timeout": 0, 00:24:21.048 "data_wr_pool_size": 0 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "nvmf_create_subsystem", 00:24:21.048 "params": { 00:24:21.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.048 "allow_any_host": false, 00:24:21.048 "serial_number": "SPDK00000000000001", 00:24:21.048 "model_number": "SPDK bdev Controller", 00:24:21.048 "max_namespaces": 10, 00:24:21.048 "min_cntlid": 1, 00:24:21.048 "max_cntlid": 65519, 00:24:21.048 
"ana_reporting": false 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "nvmf_subsystem_add_host", 00:24:21.048 "params": { 00:24:21.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.048 "host": "nqn.2016-06.io.spdk:host1", 00:24:21.048 "psk": "key0" 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "nvmf_subsystem_add_ns", 00:24:21.048 "params": { 00:24:21.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.048 "namespace": { 00:24:21.048 "nsid": 1, 00:24:21.048 "bdev_name": "malloc0", 00:24:21.048 "nguid": "AB52AA1EF956468B8CBC51F28BFE7B7D", 00:24:21.048 "uuid": "ab52aa1e-f956-468b-8cbc-51f28bfe7b7d", 00:24:21.048 "no_auto_visible": false 00:24:21.048 } 00:24:21.048 } 00:24:21.048 }, 00:24:21.048 { 00:24:21.048 "method": "nvmf_subsystem_add_listener", 00:24:21.048 "params": { 00:24:21.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.048 "listen_address": { 00:24:21.048 "trtype": "TCP", 00:24:21.048 "adrfam": "IPv4", 00:24:21.048 "traddr": "10.0.0.2", 00:24:21.048 "trsvcid": "4420" 00:24:21.048 }, 00:24:21.048 "secure_channel": true 00:24:21.048 } 00:24:21.048 } 00:24:21.048 ] 00:24:21.048 } 00:24:21.048 ] 00:24:21.048 }' 00:24:21.048 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:21.310 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:21.310 "subsystems": [ 00:24:21.310 { 00:24:21.310 "subsystem": "keyring", 00:24:21.310 "config": [ 00:24:21.310 { 00:24:21.310 "method": "keyring_file_add_key", 00:24:21.310 "params": { 00:24:21.310 "name": "key0", 00:24:21.310 "path": "/tmp/tmp.sD2gyi45yI" 00:24:21.310 } 00:24:21.310 } 00:24:21.310 ] 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "subsystem": "iobuf", 00:24:21.310 "config": [ 00:24:21.310 { 00:24:21.310 "method": "iobuf_set_options", 00:24:21.310 "params": { 00:24:21.310 "small_pool_count": 8192, 00:24:21.310 "large_pool_count": 1024, 00:24:21.310 "small_bufsize": 8192, 00:24:21.310 "large_bufsize": 135168 00:24:21.310 } 00:24:21.310 } 00:24:21.310 ] 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "subsystem": "sock", 00:24:21.310 "config": [ 00:24:21.310 { 00:24:21.310 "method": "sock_set_default_impl", 00:24:21.310 "params": { 00:24:21.310 "impl_name": "posix" 00:24:21.310 } 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "method": "sock_impl_set_options", 00:24:21.310 "params": { 00:24:21.310 "impl_name": "ssl", 00:24:21.310 "recv_buf_size": 4096, 00:24:21.310 "send_buf_size": 4096, 00:24:21.310 "enable_recv_pipe": true, 00:24:21.310 "enable_quickack": false, 00:24:21.310 "enable_placement_id": 0, 00:24:21.310 "enable_zerocopy_send_server": true, 00:24:21.310 "enable_zerocopy_send_client": false, 00:24:21.310 "zerocopy_threshold": 0, 00:24:21.310 "tls_version": 0, 00:24:21.310 "enable_ktls": false 00:24:21.310 } 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "method": "sock_impl_set_options", 00:24:21.310 "params": { 00:24:21.310 "impl_name": "posix", 00:24:21.310 "recv_buf_size": 2097152, 00:24:21.310 "send_buf_size": 2097152, 00:24:21.310 "enable_recv_pipe": true, 00:24:21.310 "enable_quickack": false, 00:24:21.310 "enable_placement_id": 0, 00:24:21.310 "enable_zerocopy_send_server": true, 00:24:21.310 "enable_zerocopy_send_client": false, 00:24:21.310 "zerocopy_threshold": 0, 00:24:21.310 "tls_version": 0, 00:24:21.310 "enable_ktls": false 00:24:21.310 } 00:24:21.310 } 00:24:21.310 ] 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 
"subsystem": "vmd", 00:24:21.310 "config": [] 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "subsystem": "accel", 00:24:21.310 "config": [ 00:24:21.310 { 00:24:21.310 "method": "accel_set_options", 00:24:21.310 "params": { 00:24:21.310 "small_cache_size": 128, 00:24:21.310 "large_cache_size": 16, 00:24:21.310 "task_count": 2048, 00:24:21.310 "sequence_count": 2048, 00:24:21.310 "buf_count": 2048 00:24:21.310 } 00:24:21.310 } 00:24:21.310 ] 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "subsystem": "bdev", 00:24:21.310 "config": [ 00:24:21.310 { 00:24:21.310 "method": "bdev_set_options", 00:24:21.310 "params": { 00:24:21.310 "bdev_io_pool_size": 65535, 00:24:21.310 "bdev_io_cache_size": 256, 00:24:21.310 "bdev_auto_examine": true, 00:24:21.310 "iobuf_small_cache_size": 128, 00:24:21.310 "iobuf_large_cache_size": 16 00:24:21.310 } 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "method": "bdev_raid_set_options", 00:24:21.310 "params": { 00:24:21.310 "process_window_size_kb": 1024, 00:24:21.310 "process_max_bandwidth_mb_sec": 0 00:24:21.310 } 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "method": "bdev_iscsi_set_options", 00:24:21.310 "params": { 00:24:21.310 "timeout_sec": 30 00:24:21.310 } 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "method": "bdev_nvme_set_options", 00:24:21.310 "params": { 00:24:21.310 "action_on_timeout": "none", 00:24:21.310 "timeout_us": 0, 00:24:21.310 "timeout_admin_us": 0, 00:24:21.310 "keep_alive_timeout_ms": 10000, 00:24:21.310 "arbitration_burst": 0, 00:24:21.310 "low_priority_weight": 0, 00:24:21.310 "medium_priority_weight": 0, 00:24:21.310 "high_priority_weight": 0, 00:24:21.310 "nvme_adminq_poll_period_us": 10000, 00:24:21.310 "nvme_ioq_poll_period_us": 0, 00:24:21.310 "io_queue_requests": 512, 00:24:21.310 "delay_cmd_submit": true, 00:24:21.310 "transport_retry_count": 4, 00:24:21.310 "bdev_retry_count": 3, 00:24:21.310 "transport_ack_timeout": 0, 00:24:21.310 "ctrlr_loss_timeout_sec": 0, 00:24:21.310 "reconnect_delay_sec": 0, 00:24:21.310 "fast_io_fail_timeout_sec": 0, 00:24:21.310 "disable_auto_failback": false, 00:24:21.310 "generate_uuids": false, 00:24:21.310 "transport_tos": 0, 00:24:21.310 "nvme_error_stat": false, 00:24:21.310 "rdma_srq_size": 0, 00:24:21.310 "io_path_stat": false, 00:24:21.310 "allow_accel_sequence": false, 00:24:21.310 "rdma_max_cq_size": 0, 00:24:21.310 "rdma_cm_event_timeout_ms": 0, 00:24:21.310 "dhchap_digests": [ 00:24:21.310 "sha256", 00:24:21.310 "sha384", 00:24:21.310 "sha512" 00:24:21.310 ], 00:24:21.310 "dhchap_dhgroups": [ 00:24:21.310 "null", 00:24:21.310 "ffdhe2048", 00:24:21.310 "ffdhe3072", 00:24:21.310 "ffdhe4096", 00:24:21.310 "ffdhe6144", 00:24:21.310 "ffdhe8192" 00:24:21.310 ] 00:24:21.310 } 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "method": "bdev_nvme_attach_controller", 00:24:21.310 "params": { 00:24:21.310 "name": "TLSTEST", 00:24:21.310 "trtype": "TCP", 00:24:21.310 "adrfam": "IPv4", 00:24:21.310 "traddr": "10.0.0.2", 00:24:21.310 "trsvcid": "4420", 00:24:21.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.310 "prchk_reftag": false, 00:24:21.310 "prchk_guard": false, 00:24:21.310 "ctrlr_loss_timeout_sec": 0, 00:24:21.310 "reconnect_delay_sec": 0, 00:24:21.310 "fast_io_fail_timeout_sec": 0, 00:24:21.310 "psk": "key0", 00:24:21.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.310 "hdgst": false, 00:24:21.310 "ddgst": false 00:24:21.310 } 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "method": "bdev_nvme_set_hotplug", 00:24:21.310 "params": { 00:24:21.310 "period_us": 100000, 00:24:21.310 "enable": false 
00:24:21.310 } 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "method": "bdev_wait_for_examine" 00:24:21.310 } 00:24:21.310 ] 00:24:21.310 }, 00:24:21.310 { 00:24:21.310 "subsystem": "nbd", 00:24:21.310 "config": [] 00:24:21.310 } 00:24:21.310 ] 00:24:21.310 }' 00:24:21.310 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 732535 00:24:21.310 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 732535 ']' 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 732535 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 732535 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 732535' 00:24:21.311 killing process with pid 732535 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 732535 00:24:21.311 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.311 00:24:21.311 Latency(us) 00:24:21.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.311 =================================================================================================================== 00:24:21.311 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 732535 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 732060 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 732060 ']' 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 732060 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:21.311 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 732060 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 732060' 00:24:21.573 killing process with pid 732060 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 732060 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 732060 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:21.573 22:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.573 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:21.573 "subsystems": [ 00:24:21.573 { 00:24:21.573 "subsystem": "keyring", 00:24:21.573 "config": [ 00:24:21.573 { 00:24:21.573 "method": "keyring_file_add_key", 00:24:21.573 "params": { 00:24:21.573 "name": "key0", 00:24:21.573 "path": "/tmp/tmp.sD2gyi45yI" 00:24:21.573 } 00:24:21.573 } 00:24:21.573 ] 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "subsystem": "iobuf", 00:24:21.573 "config": [ 00:24:21.573 { 00:24:21.573 "method": "iobuf_set_options", 00:24:21.573 "params": { 00:24:21.573 "small_pool_count": 8192, 00:24:21.573 "large_pool_count": 1024, 00:24:21.573 "small_bufsize": 8192, 00:24:21.573 "large_bufsize": 135168 00:24:21.573 } 00:24:21.573 } 00:24:21.573 ] 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "subsystem": "sock", 00:24:21.573 "config": [ 00:24:21.573 { 00:24:21.573 "method": "sock_set_default_impl", 00:24:21.573 "params": { 00:24:21.573 "impl_name": "posix" 00:24:21.573 } 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "method": "sock_impl_set_options", 00:24:21.573 "params": { 00:24:21.573 "impl_name": "ssl", 00:24:21.573 "recv_buf_size": 4096, 00:24:21.573 "send_buf_size": 4096, 00:24:21.573 "enable_recv_pipe": true, 00:24:21.573 "enable_quickack": false, 00:24:21.573 "enable_placement_id": 0, 00:24:21.573 "enable_zerocopy_send_server": true, 00:24:21.573 "enable_zerocopy_send_client": false, 00:24:21.573 "zerocopy_threshold": 0, 00:24:21.573 "tls_version": 0, 00:24:21.573 "enable_ktls": false 00:24:21.573 } 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "method": "sock_impl_set_options", 00:24:21.573 "params": { 00:24:21.573 "impl_name": "posix", 00:24:21.573 "recv_buf_size": 2097152, 00:24:21.573 "send_buf_size": 2097152, 00:24:21.573 "enable_recv_pipe": true, 00:24:21.573 "enable_quickack": false, 00:24:21.573 "enable_placement_id": 0, 00:24:21.573 "enable_zerocopy_send_server": true, 00:24:21.573 "enable_zerocopy_send_client": false, 00:24:21.573 "zerocopy_threshold": 0, 00:24:21.573 "tls_version": 0, 00:24:21.573 "enable_ktls": false 00:24:21.573 } 00:24:21.573 } 00:24:21.573 ] 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "subsystem": "vmd", 00:24:21.573 "config": [] 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "subsystem": "accel", 00:24:21.573 "config": [ 00:24:21.573 { 00:24:21.573 "method": "accel_set_options", 00:24:21.573 "params": { 00:24:21.573 "small_cache_size": 128, 00:24:21.573 "large_cache_size": 16, 00:24:21.573 "task_count": 2048, 00:24:21.573 "sequence_count": 2048, 00:24:21.573 "buf_count": 2048 00:24:21.573 } 00:24:21.573 } 00:24:21.573 ] 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "subsystem": "bdev", 00:24:21.573 "config": [ 00:24:21.573 { 00:24:21.573 "method": "bdev_set_options", 00:24:21.573 "params": { 00:24:21.573 "bdev_io_pool_size": 65535, 00:24:21.573 "bdev_io_cache_size": 256, 00:24:21.573 "bdev_auto_examine": true, 00:24:21.573 "iobuf_small_cache_size": 128, 00:24:21.573 "iobuf_large_cache_size": 16 00:24:21.573 } 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "method": "bdev_raid_set_options", 00:24:21.573 "params": { 00:24:21.573 "process_window_size_kb": 1024, 00:24:21.573 "process_max_bandwidth_mb_sec": 0 00:24:21.573 } 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "method": "bdev_iscsi_set_options", 00:24:21.573 "params": { 00:24:21.573 "timeout_sec": 30 
00:24:21.573 } 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "method": "bdev_nvme_set_options", 00:24:21.573 "params": { 00:24:21.573 "action_on_timeout": "none", 00:24:21.573 "timeout_us": 0, 00:24:21.573 "timeout_admin_us": 0, 00:24:21.573 "keep_alive_timeout_ms": 10000, 00:24:21.573 "arbitration_burst": 0, 00:24:21.573 "low_priority_weight": 0, 00:24:21.573 "medium_priority_weight": 0, 00:24:21.573 "high_priority_weight": 0, 00:24:21.573 "nvme_adminq_poll_period_us": 10000, 00:24:21.573 "nvme_ioq_poll_period_us": 0, 00:24:21.573 "io_queue_requests": 0, 00:24:21.573 "delay_cmd_submit": true, 00:24:21.573 "transport_retry_count": 4, 00:24:21.573 "bdev_retry_count": 3, 00:24:21.573 "transport_ack_timeout": 0, 00:24:21.573 "ctrlr_loss_timeout_sec": 0, 00:24:21.573 "reconnect_delay_sec": 0, 00:24:21.573 "fast_io_fail_timeout_sec": 0, 00:24:21.573 "disable_auto_failback": false, 00:24:21.573 "generate_uuids": false, 00:24:21.573 "transport_tos": 0, 00:24:21.573 "nvme_error_stat": false, 00:24:21.573 "rdma_srq_size": 0, 00:24:21.573 "io_path_stat": false, 00:24:21.573 "allow_accel_sequence": false, 00:24:21.573 "rdma_max_cq_size": 0, 00:24:21.573 "rdma_cm_event_timeout_ms": 0, 00:24:21.573 "dhchap_digests": [ 00:24:21.573 "sha256", 00:24:21.573 "sha384", 00:24:21.573 "sha512" 00:24:21.573 ], 00:24:21.573 "dhchap_dhgroups": [ 00:24:21.573 "null", 00:24:21.573 "ffdhe2048", 00:24:21.573 "ffdhe3072", 00:24:21.573 "ffdhe4096", 00:24:21.573 "ffdhe6144", 00:24:21.573 "ffdhe8192" 00:24:21.573 ] 00:24:21.573 } 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "method": "bdev_nvme_set_hotplug", 00:24:21.573 "params": { 00:24:21.573 "period_us": 100000, 00:24:21.573 "enable": false 00:24:21.573 } 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "method": "bdev_malloc_create", 00:24:21.573 "params": { 00:24:21.573 "name": "malloc0", 00:24:21.573 "num_blocks": 8192, 00:24:21.573 "block_size": 4096, 00:24:21.573 "physical_block_size": 4096, 00:24:21.573 "uuid": "ab52aa1e-f956-468b-8cbc-51f28bfe7b7d", 00:24:21.573 "optimal_io_boundary": 0, 00:24:21.573 "md_size": 0, 00:24:21.573 "dif_type": 0, 00:24:21.573 "dif_is_head_of_md": false, 00:24:21.573 "dif_pi_format": 0 00:24:21.573 } 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "method": "bdev_wait_for_examine" 00:24:21.573 } 00:24:21.573 ] 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "subsystem": "nbd", 00:24:21.573 "config": [] 00:24:21.573 }, 00:24:21.573 { 00:24:21.573 "subsystem": "scheduler", 00:24:21.573 "config": [ 00:24:21.573 { 00:24:21.573 "method": "framework_set_scheduler", 00:24:21.573 "params": { 00:24:21.573 "name": "static" 00:24:21.574 } 00:24:21.574 } 00:24:21.574 ] 00:24:21.574 }, 00:24:21.574 { 00:24:21.574 "subsystem": "nvmf", 00:24:21.574 "config": [ 00:24:21.574 { 00:24:21.574 "method": "nvmf_set_config", 00:24:21.574 "params": { 00:24:21.574 "discovery_filter": "match_any", 00:24:21.574 "admin_cmd_passthru": { 00:24:21.574 "identify_ctrlr": false 00:24:21.574 }, 00:24:21.574 "dhchap_digests": [ 00:24:21.574 "sha256", 00:24:21.574 "sha384", 00:24:21.574 "sha512" 00:24:21.574 ], 00:24:21.574 "dhchap_dhgroups": [ 00:24:21.574 "null", 00:24:21.574 "ffdhe2048", 00:24:21.574 "ffdhe3072", 00:24:21.574 "ffdhe4096", 00:24:21.574 "ffdhe6144", 00:24:21.574 "ffdhe8192" 00:24:21.574 ] 00:24:21.574 } 00:24:21.574 }, 00:24:21.574 { 00:24:21.574 "method": "nvmf_set_max_subsystems", 00:24:21.574 "params": { 00:24:21.574 "max_subsystems": 1024 00:24:21.574 } 00:24:21.574 }, 00:24:21.574 { 00:24:21.574 "method": "nvmf_set_crdt", 00:24:21.574 "params": { 00:24:21.574 
"crdt1": 0, 00:24:21.574 "crdt2": 0, 00:24:21.574 "crdt3": 0 00:24:21.574 } 00:24:21.574 }, 00:24:21.574 { 00:24:21.574 "method": "nvmf_create_transport", 00:24:21.574 "params": { 00:24:21.574 "trtype": "TCP", 00:24:21.574 "max_queue_depth": 128, 00:24:21.574 "max_io_qpairs_per_ctrlr": 127, 00:24:21.574 "in_capsule_data_size": 4096, 00:24:21.574 "max_io_size": 131072, 00:24:21.574 "io_unit_size": 131072, 00:24:21.574 "max_aq_depth": 128, 00:24:21.574 "num_shared_buffers": 511, 00:24:21.574 "buf_cache_size": 4294967295, 00:24:21.574 "dif_insert_or_strip": false, 00:24:21.574 "zcopy": false, 00:24:21.574 "c2h_success": false, 00:24:21.574 "sock_priority": 0, 00:24:21.574 "abort_timeout_sec": 1, 00:24:21.574 "ack_timeout": 0, 00:24:21.574 "data_wr_pool_size": 0 00:24:21.574 } 00:24:21.574 }, 00:24:21.574 { 00:24:21.574 "method": "nvmf_create_subsystem", 00:24:21.574 "params": { 00:24:21.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.574 "allow_any_host": false, 00:24:21.574 "serial_number": "SPDK00000000000001", 00:24:21.574 "model_number": "SPDK bdev Controller", 00:24:21.574 "max_namespaces": 10, 00:24:21.574 "min_cntlid": 1, 00:24:21.574 "max_cntlid": 65519, 00:24:21.574 "ana_reporting": false 00:24:21.574 } 00:24:21.574 }, 00:24:21.574 { 00:24:21.574 "method": "nvmf_subsystem_add_host", 00:24:21.574 "params": { 00:24:21.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.574 "host": "nqn.2016-06.io.spdk:host1", 00:24:21.574 "psk": "key0" 00:24:21.574 } 00:24:21.574 }, 00:24:21.574 { 00:24:21.574 "method": "nvmf_subsystem_add_ns", 00:24:21.574 "params": { 00:24:21.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.574 "namespace": { 00:24:21.574 "nsid": 1, 00:24:21.574 "bdev_name": "malloc0", 00:24:21.574 "nguid": "AB52AA1EF956468B8CBC51F28BFE7B7D", 00:24:21.574 "uuid": "ab52aa1e-f956-468b-8cbc-51f28bfe7b7d", 00:24:21.574 "no_auto_visible": false 00:24:21.574 } 00:24:21.574 } 00:24:21.574 }, 00:24:21.574 { 00:24:21.574 "method": "nvmf_subsystem_add_listener", 00:24:21.574 "params": { 00:24:21.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.574 "listen_address": { 00:24:21.574 "trtype": "TCP", 00:24:21.574 "adrfam": "IPv4", 00:24:21.574 "traddr": "10.0.0.2", 00:24:21.574 "trsvcid": "4420" 00:24:21.574 }, 00:24:21.574 "secure_channel": true 00:24:21.574 } 00:24:21.574 } 00:24:21.574 ] 00:24:21.574 } 00:24:21.574 ] 00:24:21.574 }' 00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=732895 00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 732895 00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 732895 ']' 00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.574 22:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.574 [2024-09-30 22:52:48.535970] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:21.574 [2024-09-30 22:52:48.536023] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.836 [2024-09-30 22:52:48.618869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.836 [2024-09-30 22:52:48.676419] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.836 [2024-09-30 22:52:48.676456] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.836 [2024-09-30 22:52:48.676461] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.836 [2024-09-30 22:52:48.676466] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.836 [2024-09-30 22:52:48.676470] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.836 [2024-09-30 22:52:48.676523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.096 [2024-09-30 22:52:48.880906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.096 [2024-09-30 22:52:48.912932] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.096 [2024-09-30 22:52:48.913141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.356 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.356 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:22.356 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:22.356 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.356 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=733091 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 733091 /var/tmp/bdevperf.sock 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 733091 ']' 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
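
[The bdevperf initiator launched below receives the mirror-image config on /dev/fd/63; its TLS-relevant entries are a keyring_file_add_key pointing at the same key file under the name "key0" and a bdev_nvme_attach_controller with "psk": "key0". The same attach can be driven interactively over bdevperf's RPC socket, which is what the later runs in this trace do; a sketch, with the controller name TLSTEST taken from the JSON below (the later runs use -b nvme0 instead):

    # Sketch: bdevperf started with -z waits for RPCs; add the key, attach the
    # TLS-protected controller, then kick off the verify workload.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
]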
00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.617 22:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:22.617 "subsystems": [ 00:24:22.617 { 00:24:22.617 "subsystem": "keyring", 00:24:22.617 "config": [ 00:24:22.617 { 00:24:22.617 "method": "keyring_file_add_key", 00:24:22.617 "params": { 00:24:22.617 "name": "key0", 00:24:22.617 "path": "/tmp/tmp.sD2gyi45yI" 00:24:22.617 } 00:24:22.617 } 00:24:22.617 ] 00:24:22.617 }, 00:24:22.617 { 00:24:22.617 "subsystem": "iobuf", 00:24:22.617 "config": [ 00:24:22.617 { 00:24:22.617 "method": "iobuf_set_options", 00:24:22.617 "params": { 00:24:22.617 "small_pool_count": 8192, 00:24:22.617 "large_pool_count": 1024, 00:24:22.617 "small_bufsize": 8192, 00:24:22.617 "large_bufsize": 135168 00:24:22.617 } 00:24:22.617 } 00:24:22.617 ] 00:24:22.617 }, 00:24:22.617 { 00:24:22.617 "subsystem": "sock", 00:24:22.617 "config": [ 00:24:22.617 { 00:24:22.617 "method": "sock_set_default_impl", 00:24:22.617 "params": { 00:24:22.617 "impl_name": "posix" 00:24:22.617 } 00:24:22.617 }, 00:24:22.617 { 00:24:22.617 "method": "sock_impl_set_options", 00:24:22.617 "params": { 00:24:22.617 "impl_name": "ssl", 00:24:22.617 "recv_buf_size": 4096, 00:24:22.617 "send_buf_size": 4096, 00:24:22.617 "enable_recv_pipe": true, 00:24:22.617 "enable_quickack": false, 00:24:22.617 "enable_placement_id": 0, 00:24:22.617 "enable_zerocopy_send_server": true, 00:24:22.617 "enable_zerocopy_send_client": false, 00:24:22.617 "zerocopy_threshold": 0, 00:24:22.617 "tls_version": 0, 00:24:22.617 "enable_ktls": false 00:24:22.617 } 00:24:22.617 }, 00:24:22.617 { 00:24:22.617 "method": "sock_impl_set_options", 00:24:22.617 "params": { 00:24:22.617 "impl_name": "posix", 00:24:22.617 "recv_buf_size": 2097152, 00:24:22.617 "send_buf_size": 2097152, 00:24:22.617 "enable_recv_pipe": true, 00:24:22.617 "enable_quickack": false, 00:24:22.617 "enable_placement_id": 0, 00:24:22.617 "enable_zerocopy_send_server": true, 00:24:22.617 "enable_zerocopy_send_client": false, 00:24:22.617 "zerocopy_threshold": 0, 00:24:22.617 "tls_version": 0, 00:24:22.617 "enable_ktls": false 00:24:22.617 } 00:24:22.617 } 00:24:22.617 ] 00:24:22.617 }, 00:24:22.617 { 00:24:22.617 "subsystem": "vmd", 00:24:22.617 "config": [] 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "subsystem": "accel", 00:24:22.618 "config": [ 00:24:22.618 { 00:24:22.618 "method": "accel_set_options", 00:24:22.618 "params": { 00:24:22.618 "small_cache_size": 128, 00:24:22.618 "large_cache_size": 16, 00:24:22.618 "task_count": 2048, 00:24:22.618 "sequence_count": 2048, 00:24:22.618 "buf_count": 2048 00:24:22.618 } 00:24:22.618 } 00:24:22.618 ] 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "subsystem": "bdev", 00:24:22.618 "config": [ 00:24:22.618 { 00:24:22.618 "method": "bdev_set_options", 00:24:22.618 "params": { 00:24:22.618 "bdev_io_pool_size": 65535, 00:24:22.618 "bdev_io_cache_size": 256, 00:24:22.618 "bdev_auto_examine": true, 00:24:22.618 "iobuf_small_cache_size": 128, 00:24:22.618 "iobuf_large_cache_size": 16 00:24:22.618 } 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "method": "bdev_raid_set_options", 00:24:22.618 
"params": { 00:24:22.618 "process_window_size_kb": 1024, 00:24:22.618 "process_max_bandwidth_mb_sec": 0 00:24:22.618 } 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "method": "bdev_iscsi_set_options", 00:24:22.618 "params": { 00:24:22.618 "timeout_sec": 30 00:24:22.618 } 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "method": "bdev_nvme_set_options", 00:24:22.618 "params": { 00:24:22.618 "action_on_timeout": "none", 00:24:22.618 "timeout_us": 0, 00:24:22.618 "timeout_admin_us": 0, 00:24:22.618 "keep_alive_timeout_ms": 10000, 00:24:22.618 "arbitration_burst": 0, 00:24:22.618 "low_priority_weight": 0, 00:24:22.618 "medium_priority_weight": 0, 00:24:22.618 "high_priority_weight": 0, 00:24:22.618 "nvme_adminq_poll_period_us": 10000, 00:24:22.618 "nvme_ioq_poll_period_us": 0, 00:24:22.618 "io_queue_requests": 512, 00:24:22.618 "delay_cmd_submit": true, 00:24:22.618 "transport_retry_count": 4, 00:24:22.618 "bdev_retry_count": 3, 00:24:22.618 "transport_ack_timeout": 0, 00:24:22.618 "ctrlr_loss_timeout_sec": 0, 00:24:22.618 "reconnect_delay_sec": 0, 00:24:22.618 "fast_io_fail_timeout_sec": 0, 00:24:22.618 "disable_auto_failback": false, 00:24:22.618 "generate_uuids": false, 00:24:22.618 "transport_tos": 0, 00:24:22.618 "nvme_error_stat": false, 00:24:22.618 "rdma_srq_size": 0, 00:24:22.618 "io_path_stat": false, 00:24:22.618 "allow_accel_sequence": false, 00:24:22.618 "rdma_max_cq_size": 0, 00:24:22.618 "rdma_cm_event_timeout_ms": 0, 00:24:22.618 "dhchap_digests": [ 00:24:22.618 "sha256", 00:24:22.618 "sha384", 00:24:22.618 "sha512" 00:24:22.618 ], 00:24:22.618 "dhchap_dhgroups": [ 00:24:22.618 "null", 00:24:22.618 "ffdhe2048", 00:24:22.618 "ffdhe3072", 00:24:22.618 "ffdhe4096", 00:24:22.618 "ffdhe6144", 00:24:22.618 "ffdhe8192" 00:24:22.618 ] 00:24:22.618 } 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "method": "bdev_nvme_attach_controller", 00:24:22.618 "params": { 00:24:22.618 "name": "TLSTEST", 00:24:22.618 "trtype": "TCP", 00:24:22.618 "adrfam": "IPv4", 00:24:22.618 "traddr": "10.0.0.2", 00:24:22.618 "trsvcid": "4420", 00:24:22.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.618 "prchk_reftag": false, 00:24:22.618 "prchk_guard": false, 00:24:22.618 "ctrlr_loss_timeout_sec": 0, 00:24:22.618 "reconnect_delay_sec": 0, 00:24:22.618 "fast_io_fail_timeout_sec": 0, 00:24:22.618 "psk": "key0", 00:24:22.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.618 "hdgst": false, 00:24:22.618 "ddgst": false 00:24:22.618 } 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "method": "bdev_nvme_set_hotplug", 00:24:22.618 "params": { 00:24:22.618 "period_us": 100000, 00:24:22.618 "enable": false 00:24:22.618 } 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "method": "bdev_wait_for_examine" 00:24:22.618 } 00:24:22.618 ] 00:24:22.618 }, 00:24:22.618 { 00:24:22.618 "subsystem": "nbd", 00:24:22.618 "config": [] 00:24:22.618 } 00:24:22.618 ] 00:24:22.618 }' 00:24:22.618 [2024-09-30 22:52:49.425073] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:24:22.618 [2024-09-30 22:52:49.425125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733091 ] 00:24:22.618 [2024-09-30 22:52:49.502764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.618 [2024-09-30 22:52:49.555174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.879 [2024-09-30 22:52:49.688918] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.450 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.450 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:23.450 22:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:23.450 Running I/O for 10 seconds... 00:24:33.747 6151.00 IOPS, 24.03 MiB/s 6196.00 IOPS, 24.20 MiB/s 6209.00 IOPS, 24.25 MiB/s 6212.00 IOPS, 24.27 MiB/s 6217.40 IOPS, 24.29 MiB/s 6236.33 IOPS, 24.36 MiB/s 6211.29 IOPS, 24.26 MiB/s 6206.38 IOPS, 24.24 MiB/s 6217.11 IOPS, 24.29 MiB/s 6216.70 IOPS, 24.28 MiB/s 00:24:33.747 Latency(us) 00:24:33.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.747 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:33.747 Verification LBA range: start 0x0 length 0x2000 00:24:33.747 TLSTESTn1 : 10.01 6221.77 24.30 0.00 0.00 20542.70 3877.55 18568.53 00:24:33.747 =================================================================================================================== 00:24:33.747 Total : 6221.77 24.30 0.00 0.00 20542.70 3877.55 18568.53 00:24:33.747 { 00:24:33.747 "results": [ 00:24:33.747 { 00:24:33.747 "job": "TLSTESTn1", 00:24:33.747 "core_mask": "0x4", 00:24:33.747 "workload": "verify", 00:24:33.747 "status": "finished", 00:24:33.747 "verify_range": { 00:24:33.747 "start": 0, 00:24:33.747 "length": 8192 00:24:33.747 }, 00:24:33.747 "queue_depth": 128, 00:24:33.747 "io_size": 4096, 00:24:33.747 "runtime": 10.012268, 00:24:33.747 "iops": 6221.76713607746, 00:24:33.747 "mibps": 24.30377787530258, 00:24:33.747 "io_failed": 0, 00:24:33.747 "io_timeout": 0, 00:24:33.747 "avg_latency_us": 20542.70201389112, 00:24:33.747 "min_latency_us": 3877.5466666666666, 00:24:33.747 "max_latency_us": 18568.533333333333 00:24:33.747 } 00:24:33.747 ], 00:24:33.747 "core_count": 1 00:24:33.747 } 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 733091 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 733091 ']' 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 733091 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 733091 00:24:33.747 22:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 733091' 00:24:33.747 killing process with pid 733091 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 733091 00:24:33.747 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.747 00:24:33.747 Latency(us) 00:24:33.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.747 =================================================================================================================== 00:24:33.747 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 733091 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 732895 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 732895 ']' 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 732895 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 732895 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 732895' 00:24:33.747 killing process with pid 732895 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 732895 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 732895 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=735266 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 735266 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 735266 ']' 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.747 22:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.747 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.009 [2024-09-30 22:53:00.803162] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:34.009 [2024-09-30 22:53:00.803226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.009 [2024-09-30 22:53:00.887679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.009 [2024-09-30 22:53:00.981094] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.009 [2024-09-30 22:53:00.981156] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.009 [2024-09-30 22:53:00.981165] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.009 [2024-09-30 22:53:00.981173] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.009 [2024-09-30 22:53:00.981179] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.009 [2024-09-30 22:53:00.981212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.sD2gyi45yI 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sD2gyi45yI 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:34.952 [2024-09-30 22:53:01.804740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.952 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:35.213 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:35.213 [2024-09-30 22:53:02.157619] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:24:35.213 [2024-09-30 22:53:02.157972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.213 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:35.475 malloc0 00:24:35.475 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:35.735 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:35.735 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=735633 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 735633 /var/tmp/bdevperf.sock 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 735633 ']' 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.997 22:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.997 [2024-09-30 22:53:02.948376] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
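
[Both sides register the same /tmp/tmp.sD2gyi45yI file; its contents never appear in this trace, but the key must be in the NVMe/TCP PSK interchange format, "NVMeTLSkey-1:<hash>:<base64 of configured PSK plus CRC-32>:". A sketch of preparing such a file, with a purely illustrative key value (the mktemp-style filename suggests tls.sh created its key file the same way):

    # Illustrative only -- the real key inside /tmp/tmp.sD2gyi45yI is not shown
    # anywhere in this log; any valid interchange-format PSK would do.
    PSK_FILE=$(mktemp)
    echo "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$PSK_FILE"
    chmod 600 "$PSK_FILE"    # keep key material private to the owner
]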
00:24:35.997 [2024-09-30 22:53:02.948450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735633 ] 00:24:36.258 [2024-09-30 22:53:03.031420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.258 [2024-09-30 22:53:03.093654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.827 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.827 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:36.828 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:37.087 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:37.087 [2024-09-30 22:53:04.062766] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:37.352 nvme0n1 00:24:37.352 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:37.352 Running I/O for 1 seconds... 00:24:38.374 4084.00 IOPS, 15.95 MiB/s 00:24:38.374 Latency(us) 00:24:38.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.374 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:38.374 Verification LBA range: start 0x0 length 0x2000 00:24:38.374 nvme0n1 : 1.02 4138.91 16.17 0.00 0.00 30714.62 5898.24 52865.71 00:24:38.374 =================================================================================================================== 00:24:38.374 Total : 4138.91 16.17 0.00 0.00 30714.62 5898.24 52865.71 00:24:38.374 { 00:24:38.374 "results": [ 00:24:38.374 { 00:24:38.374 "job": "nvme0n1", 00:24:38.374 "core_mask": "0x2", 00:24:38.374 "workload": "verify", 00:24:38.374 "status": "finished", 00:24:38.374 "verify_range": { 00:24:38.374 "start": 0, 00:24:38.374 "length": 8192 00:24:38.374 }, 00:24:38.374 "queue_depth": 128, 00:24:38.374 "io_size": 4096, 00:24:38.374 "runtime": 1.017659, 00:24:38.374 "iops": 4138.9109711602805, 00:24:38.374 "mibps": 16.167620981094846, 00:24:38.374 "io_failed": 0, 00:24:38.374 "io_timeout": 0, 00:24:38.374 "avg_latency_us": 30714.6190566635, 00:24:38.374 "min_latency_us": 5898.24, 00:24:38.374 "max_latency_us": 52865.706666666665 00:24:38.374 } 00:24:38.374 ], 00:24:38.374 "core_count": 1 00:24:38.374 } 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 735633 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 735633 ']' 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 735633 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.374 
22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 735633 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 735633' 00:24:38.374 killing process with pid 735633 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 735633 00:24:38.374 Received shutdown signal, test time was about 1.000000 seconds 00:24:38.374 00:24:38.374 Latency(us) 00:24:38.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.374 =================================================================================================================== 00:24:38.374 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.374 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 735633 00:24:38.634 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 735266 00:24:38.634 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 735266 ']' 00:24:38.634 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 735266 00:24:38.634 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:38.634 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.634 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 735266 00:24:38.634 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.635 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.635 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 735266' 00:24:38.635 killing process with pid 735266 00:24:38.635 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 735266 00:24:38.635 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 735266 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=736309 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 736309 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 736309 ']' 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.895 22:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.895 22:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.895 [2024-09-30 22:53:05.731583] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:38.895 [2024-09-30 22:53:05.731643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.895 [2024-09-30 22:53:05.812337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.895 [2024-09-30 22:53:05.866980] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.895 [2024-09-30 22:53:05.867013] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.895 [2024-09-30 22:53:05.867019] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.895 [2024-09-30 22:53:05.867024] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.895 [2024-09-30 22:53:05.867028] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.895 [2024-09-30 22:53:05.867042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.837 [2024-09-30 22:53:06.558542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.837 malloc0 00:24:39.837 [2024-09-30 22:53:06.601386] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.837 [2024-09-30 22:53:06.601619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=736445 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 736445 /var/tmp/bdevperf.sock 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 736445 ']' 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.837 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.837 [2024-09-30 22:53:06.680400] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:39.837 [2024-09-30 22:53:06.680449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736445 ] 00:24:39.837 [2024-09-30 22:53:06.757413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.837 [2024-09-30 22:53:06.810828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.777 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.777 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:40.777 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sD2gyi45yI 00:24:40.777 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:40.777 [2024-09-30 22:53:07.770494] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.038 nvme0n1 00:24:41.038 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.038 Running I/O for 1 seconds... 
00:24:41.978 5561.00 IOPS, 21.72 MiB/s 00:24:41.978 Latency(us) 00:24:41.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.978 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:41.978 Verification LBA range: start 0x0 length 0x2000 00:24:41.978 nvme0n1 : 1.01 5618.54 21.95 0.00 0.00 22638.08 4915.20 23920.64 00:24:41.978 =================================================================================================================== 00:24:41.978 Total : 5618.54 21.95 0.00 0.00 22638.08 4915.20 23920.64 00:24:41.978 { 00:24:41.978 "results": [ 00:24:41.978 { 00:24:41.978 "job": "nvme0n1", 00:24:41.978 "core_mask": "0x2", 00:24:41.979 "workload": "verify", 00:24:41.979 "status": "finished", 00:24:41.979 "verify_range": { 00:24:41.979 "start": 0, 00:24:41.979 "length": 8192 00:24:41.979 }, 00:24:41.979 "queue_depth": 128, 00:24:41.979 "io_size": 4096, 00:24:41.979 "runtime": 1.012541, 00:24:41.979 "iops": 5618.5379159955, 00:24:41.979 "mibps": 21.947413734357422, 00:24:41.979 "io_failed": 0, 00:24:41.979 "io_timeout": 0, 00:24:41.979 "avg_latency_us": 22638.08344993262, 00:24:41.979 "min_latency_us": 4915.2, 00:24:41.979 "max_latency_us": 23920.64 00:24:41.979 } 00:24:41.979 ], 00:24:41.979 "core_count": 1 00:24:41.979 } 00:24:41.979 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:41.979 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.979 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.238 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.238 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:42.238 "subsystems": [ 00:24:42.238 { 00:24:42.238 "subsystem": "keyring", 00:24:42.238 "config": [ 00:24:42.238 { 00:24:42.238 "method": "keyring_file_add_key", 00:24:42.238 "params": { 00:24:42.238 "name": "key0", 00:24:42.238 "path": "/tmp/tmp.sD2gyi45yI" 00:24:42.238 } 00:24:42.238 } 00:24:42.238 ] 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "subsystem": "iobuf", 00:24:42.238 "config": [ 00:24:42.238 { 00:24:42.238 "method": "iobuf_set_options", 00:24:42.238 "params": { 00:24:42.238 "small_pool_count": 8192, 00:24:42.238 "large_pool_count": 1024, 00:24:42.238 "small_bufsize": 8192, 00:24:42.238 "large_bufsize": 135168 00:24:42.238 } 00:24:42.238 } 00:24:42.238 ] 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "subsystem": "sock", 00:24:42.238 "config": [ 00:24:42.238 { 00:24:42.238 "method": "sock_set_default_impl", 00:24:42.238 "params": { 00:24:42.238 "impl_name": "posix" 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "sock_impl_set_options", 00:24:42.238 "params": { 00:24:42.238 "impl_name": "ssl", 00:24:42.238 "recv_buf_size": 4096, 00:24:42.238 "send_buf_size": 4096, 00:24:42.238 "enable_recv_pipe": true, 00:24:42.238 "enable_quickack": false, 00:24:42.238 "enable_placement_id": 0, 00:24:42.238 "enable_zerocopy_send_server": true, 00:24:42.238 "enable_zerocopy_send_client": false, 00:24:42.238 "zerocopy_threshold": 0, 00:24:42.238 "tls_version": 0, 00:24:42.238 "enable_ktls": false 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "sock_impl_set_options", 00:24:42.238 "params": { 00:24:42.238 "impl_name": "posix", 00:24:42.238 "recv_buf_size": 2097152, 00:24:42.238 "send_buf_size": 2097152, 00:24:42.238 "enable_recv_pipe": true, 00:24:42.238 
"enable_quickack": false, 00:24:42.238 "enable_placement_id": 0, 00:24:42.238 "enable_zerocopy_send_server": true, 00:24:42.238 "enable_zerocopy_send_client": false, 00:24:42.238 "zerocopy_threshold": 0, 00:24:42.238 "tls_version": 0, 00:24:42.238 "enable_ktls": false 00:24:42.238 } 00:24:42.238 } 00:24:42.238 ] 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "subsystem": "vmd", 00:24:42.238 "config": [] 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "subsystem": "accel", 00:24:42.238 "config": [ 00:24:42.238 { 00:24:42.238 "method": "accel_set_options", 00:24:42.238 "params": { 00:24:42.238 "small_cache_size": 128, 00:24:42.238 "large_cache_size": 16, 00:24:42.238 "task_count": 2048, 00:24:42.238 "sequence_count": 2048, 00:24:42.238 "buf_count": 2048 00:24:42.238 } 00:24:42.238 } 00:24:42.238 ] 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "subsystem": "bdev", 00:24:42.238 "config": [ 00:24:42.238 { 00:24:42.238 "method": "bdev_set_options", 00:24:42.238 "params": { 00:24:42.238 "bdev_io_pool_size": 65535, 00:24:42.238 "bdev_io_cache_size": 256, 00:24:42.238 "bdev_auto_examine": true, 00:24:42.238 "iobuf_small_cache_size": 128, 00:24:42.238 "iobuf_large_cache_size": 16 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "bdev_raid_set_options", 00:24:42.238 "params": { 00:24:42.238 "process_window_size_kb": 1024, 00:24:42.238 "process_max_bandwidth_mb_sec": 0 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "bdev_iscsi_set_options", 00:24:42.238 "params": { 00:24:42.238 "timeout_sec": 30 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "bdev_nvme_set_options", 00:24:42.238 "params": { 00:24:42.238 "action_on_timeout": "none", 00:24:42.238 "timeout_us": 0, 00:24:42.238 "timeout_admin_us": 0, 00:24:42.238 "keep_alive_timeout_ms": 10000, 00:24:42.238 "arbitration_burst": 0, 00:24:42.238 "low_priority_weight": 0, 00:24:42.238 "medium_priority_weight": 0, 00:24:42.238 "high_priority_weight": 0, 00:24:42.238 "nvme_adminq_poll_period_us": 10000, 00:24:42.238 "nvme_ioq_poll_period_us": 0, 00:24:42.238 "io_queue_requests": 0, 00:24:42.238 "delay_cmd_submit": true, 00:24:42.238 "transport_retry_count": 4, 00:24:42.238 "bdev_retry_count": 3, 00:24:42.238 "transport_ack_timeout": 0, 00:24:42.238 "ctrlr_loss_timeout_sec": 0, 00:24:42.238 "reconnect_delay_sec": 0, 00:24:42.238 "fast_io_fail_timeout_sec": 0, 00:24:42.238 "disable_auto_failback": false, 00:24:42.238 "generate_uuids": false, 00:24:42.238 "transport_tos": 0, 00:24:42.238 "nvme_error_stat": false, 00:24:42.238 "rdma_srq_size": 0, 00:24:42.238 "io_path_stat": false, 00:24:42.238 "allow_accel_sequence": false, 00:24:42.238 "rdma_max_cq_size": 0, 00:24:42.238 "rdma_cm_event_timeout_ms": 0, 00:24:42.238 "dhchap_digests": [ 00:24:42.238 "sha256", 00:24:42.238 "sha384", 00:24:42.238 "sha512" 00:24:42.238 ], 00:24:42.238 "dhchap_dhgroups": [ 00:24:42.238 "null", 00:24:42.238 "ffdhe2048", 00:24:42.238 "ffdhe3072", 00:24:42.238 "ffdhe4096", 00:24:42.238 "ffdhe6144", 00:24:42.238 "ffdhe8192" 00:24:42.238 ] 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "bdev_nvme_set_hotplug", 00:24:42.238 "params": { 00:24:42.238 "period_us": 100000, 00:24:42.238 "enable": false 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "bdev_malloc_create", 00:24:42.238 "params": { 00:24:42.238 "name": "malloc0", 00:24:42.238 "num_blocks": 8192, 00:24:42.238 "block_size": 4096, 00:24:42.238 "physical_block_size": 4096, 00:24:42.238 "uuid": "7230e628-b81d-4e29-ae2d-dc6bbe480c17", 
00:24:42.238 "optimal_io_boundary": 0, 00:24:42.238 "md_size": 0, 00:24:42.238 "dif_type": 0, 00:24:42.238 "dif_is_head_of_md": false, 00:24:42.238 "dif_pi_format": 0 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "bdev_wait_for_examine" 00:24:42.238 } 00:24:42.238 ] 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "subsystem": "nbd", 00:24:42.238 "config": [] 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "subsystem": "scheduler", 00:24:42.238 "config": [ 00:24:42.238 { 00:24:42.238 "method": "framework_set_scheduler", 00:24:42.238 "params": { 00:24:42.238 "name": "static" 00:24:42.238 } 00:24:42.238 } 00:24:42.238 ] 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "subsystem": "nvmf", 00:24:42.238 "config": [ 00:24:42.238 { 00:24:42.238 "method": "nvmf_set_config", 00:24:42.238 "params": { 00:24:42.238 "discovery_filter": "match_any", 00:24:42.238 "admin_cmd_passthru": { 00:24:42.238 "identify_ctrlr": false 00:24:42.238 }, 00:24:42.238 "dhchap_digests": [ 00:24:42.238 "sha256", 00:24:42.238 "sha384", 00:24:42.238 "sha512" 00:24:42.238 ], 00:24:42.238 "dhchap_dhgroups": [ 00:24:42.238 "null", 00:24:42.238 "ffdhe2048", 00:24:42.238 "ffdhe3072", 00:24:42.238 "ffdhe4096", 00:24:42.238 "ffdhe6144", 00:24:42.238 "ffdhe8192" 00:24:42.238 ] 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "nvmf_set_max_subsystems", 00:24:42.238 "params": { 00:24:42.238 "max_subsystems": 1024 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "nvmf_set_crdt", 00:24:42.238 "params": { 00:24:42.238 "crdt1": 0, 00:24:42.238 "crdt2": 0, 00:24:42.238 "crdt3": 0 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "nvmf_create_transport", 00:24:42.238 "params": { 00:24:42.238 "trtype": "TCP", 00:24:42.238 "max_queue_depth": 128, 00:24:42.238 "max_io_qpairs_per_ctrlr": 127, 00:24:42.238 "in_capsule_data_size": 4096, 00:24:42.238 "max_io_size": 131072, 00:24:42.238 "io_unit_size": 131072, 00:24:42.238 "max_aq_depth": 128, 00:24:42.238 "num_shared_buffers": 511, 00:24:42.238 "buf_cache_size": 4294967295, 00:24:42.238 "dif_insert_or_strip": false, 00:24:42.238 "zcopy": false, 00:24:42.238 "c2h_success": false, 00:24:42.238 "sock_priority": 0, 00:24:42.238 "abort_timeout_sec": 1, 00:24:42.238 "ack_timeout": 0, 00:24:42.238 "data_wr_pool_size": 0 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "nvmf_create_subsystem", 00:24:42.238 "params": { 00:24:42.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.238 "allow_any_host": false, 00:24:42.238 "serial_number": "00000000000000000000", 00:24:42.238 "model_number": "SPDK bdev Controller", 00:24:42.238 "max_namespaces": 32, 00:24:42.238 "min_cntlid": 1, 00:24:42.238 "max_cntlid": 65519, 00:24:42.238 "ana_reporting": false 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "nvmf_subsystem_add_host", 00:24:42.238 "params": { 00:24:42.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.238 "host": "nqn.2016-06.io.spdk:host1", 00:24:42.238 "psk": "key0" 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "nvmf_subsystem_add_ns", 00:24:42.238 "params": { 00:24:42.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.238 "namespace": { 00:24:42.238 "nsid": 1, 00:24:42.238 "bdev_name": "malloc0", 00:24:42.238 "nguid": "7230E628B81D4E29AE2DDC6BBE480C17", 00:24:42.238 "uuid": "7230e628-b81d-4e29-ae2d-dc6bbe480c17", 00:24:42.238 "no_auto_visible": false 00:24:42.238 } 00:24:42.238 } 00:24:42.238 }, 00:24:42.238 { 00:24:42.238 "method": "nvmf_subsystem_add_listener", 00:24:42.238 
"params": { 00:24:42.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.238 "listen_address": { 00:24:42.238 "trtype": "TCP", 00:24:42.238 "adrfam": "IPv4", 00:24:42.238 "traddr": "10.0.0.2", 00:24:42.238 "trsvcid": "4420" 00:24:42.238 }, 00:24:42.238 "secure_channel": false, 00:24:42.238 "sock_impl": "ssl" 00:24:42.238 } 00:24:42.238 } 00:24:42.238 ] 00:24:42.238 } 00:24:42.238 ] 00:24:42.238 }' 00:24:42.238 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:42.499 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:42.499 "subsystems": [ 00:24:42.499 { 00:24:42.499 "subsystem": "keyring", 00:24:42.499 "config": [ 00:24:42.499 { 00:24:42.499 "method": "keyring_file_add_key", 00:24:42.499 "params": { 00:24:42.499 "name": "key0", 00:24:42.499 "path": "/tmp/tmp.sD2gyi45yI" 00:24:42.499 } 00:24:42.499 } 00:24:42.499 ] 00:24:42.499 }, 00:24:42.499 { 00:24:42.499 "subsystem": "iobuf", 00:24:42.499 "config": [ 00:24:42.499 { 00:24:42.499 "method": "iobuf_set_options", 00:24:42.499 "params": { 00:24:42.499 "small_pool_count": 8192, 00:24:42.499 "large_pool_count": 1024, 00:24:42.499 "small_bufsize": 8192, 00:24:42.499 "large_bufsize": 135168 00:24:42.499 } 00:24:42.499 } 00:24:42.499 ] 00:24:42.499 }, 00:24:42.499 { 00:24:42.499 "subsystem": "sock", 00:24:42.499 "config": [ 00:24:42.499 { 00:24:42.499 "method": "sock_set_default_impl", 00:24:42.499 "params": { 00:24:42.499 "impl_name": "posix" 00:24:42.499 } 00:24:42.499 }, 00:24:42.499 { 00:24:42.499 "method": "sock_impl_set_options", 00:24:42.499 "params": { 00:24:42.499 "impl_name": "ssl", 00:24:42.499 "recv_buf_size": 4096, 00:24:42.499 "send_buf_size": 4096, 00:24:42.499 "enable_recv_pipe": true, 00:24:42.499 "enable_quickack": false, 00:24:42.499 "enable_placement_id": 0, 00:24:42.499 "enable_zerocopy_send_server": true, 00:24:42.499 "enable_zerocopy_send_client": false, 00:24:42.499 "zerocopy_threshold": 0, 00:24:42.499 "tls_version": 0, 00:24:42.499 "enable_ktls": false 00:24:42.499 } 00:24:42.499 }, 00:24:42.499 { 00:24:42.499 "method": "sock_impl_set_options", 00:24:42.499 "params": { 00:24:42.499 "impl_name": "posix", 00:24:42.499 "recv_buf_size": 2097152, 00:24:42.499 "send_buf_size": 2097152, 00:24:42.499 "enable_recv_pipe": true, 00:24:42.499 "enable_quickack": false, 00:24:42.499 "enable_placement_id": 0, 00:24:42.499 "enable_zerocopy_send_server": true, 00:24:42.499 "enable_zerocopy_send_client": false, 00:24:42.499 "zerocopy_threshold": 0, 00:24:42.499 "tls_version": 0, 00:24:42.499 "enable_ktls": false 00:24:42.499 } 00:24:42.499 } 00:24:42.499 ] 00:24:42.499 }, 00:24:42.499 { 00:24:42.499 "subsystem": "vmd", 00:24:42.499 "config": [] 00:24:42.499 }, 00:24:42.499 { 00:24:42.499 "subsystem": "accel", 00:24:42.499 "config": [ 00:24:42.499 { 00:24:42.499 "method": "accel_set_options", 00:24:42.499 "params": { 00:24:42.499 "small_cache_size": 128, 00:24:42.499 "large_cache_size": 16, 00:24:42.499 "task_count": 2048, 00:24:42.499 "sequence_count": 2048, 00:24:42.499 "buf_count": 2048 00:24:42.499 } 00:24:42.499 } 00:24:42.499 ] 00:24:42.499 }, 00:24:42.499 { 00:24:42.499 "subsystem": "bdev", 00:24:42.499 "config": [ 00:24:42.499 { 00:24:42.499 "method": "bdev_set_options", 00:24:42.499 "params": { 00:24:42.499 "bdev_io_pool_size": 65535, 00:24:42.499 "bdev_io_cache_size": 256, 00:24:42.500 "bdev_auto_examine": true, 00:24:42.500 "iobuf_small_cache_size": 128, 00:24:42.500 
"iobuf_large_cache_size": 16 00:24:42.500 } 00:24:42.500 }, 00:24:42.500 { 00:24:42.500 "method": "bdev_raid_set_options", 00:24:42.500 "params": { 00:24:42.500 "process_window_size_kb": 1024, 00:24:42.500 "process_max_bandwidth_mb_sec": 0 00:24:42.500 } 00:24:42.500 }, 00:24:42.500 { 00:24:42.500 "method": "bdev_iscsi_set_options", 00:24:42.500 "params": { 00:24:42.500 "timeout_sec": 30 00:24:42.500 } 00:24:42.500 }, 00:24:42.500 { 00:24:42.500 "method": "bdev_nvme_set_options", 00:24:42.500 "params": { 00:24:42.500 "action_on_timeout": "none", 00:24:42.500 "timeout_us": 0, 00:24:42.500 "timeout_admin_us": 0, 00:24:42.500 "keep_alive_timeout_ms": 10000, 00:24:42.500 "arbitration_burst": 0, 00:24:42.500 "low_priority_weight": 0, 00:24:42.500 "medium_priority_weight": 0, 00:24:42.500 "high_priority_weight": 0, 00:24:42.500 "nvme_adminq_poll_period_us": 10000, 00:24:42.500 "nvme_ioq_poll_period_us": 0, 00:24:42.500 "io_queue_requests": 512, 00:24:42.500 "delay_cmd_submit": true, 00:24:42.500 "transport_retry_count": 4, 00:24:42.500 "bdev_retry_count": 3, 00:24:42.500 "transport_ack_timeout": 0, 00:24:42.500 "ctrlr_loss_timeout_sec": 0, 00:24:42.500 "reconnect_delay_sec": 0, 00:24:42.500 "fast_io_fail_timeout_sec": 0, 00:24:42.500 "disable_auto_failback": false, 00:24:42.500 "generate_uuids": false, 00:24:42.500 "transport_tos": 0, 00:24:42.500 "nvme_error_stat": false, 00:24:42.500 "rdma_srq_size": 0, 00:24:42.500 "io_path_stat": false, 00:24:42.500 "allow_accel_sequence": false, 00:24:42.500 "rdma_max_cq_size": 0, 00:24:42.500 "rdma_cm_event_timeout_ms": 0, 00:24:42.500 "dhchap_digests": [ 00:24:42.500 "sha256", 00:24:42.500 "sha384", 00:24:42.500 "sha512" 00:24:42.500 ], 00:24:42.500 "dhchap_dhgroups": [ 00:24:42.500 "null", 00:24:42.500 "ffdhe2048", 00:24:42.500 "ffdhe3072", 00:24:42.500 "ffdhe4096", 00:24:42.500 "ffdhe6144", 00:24:42.500 "ffdhe8192" 00:24:42.500 ] 00:24:42.500 } 00:24:42.500 }, 00:24:42.500 { 00:24:42.500 "method": "bdev_nvme_attach_controller", 00:24:42.500 "params": { 00:24:42.500 "name": "nvme0", 00:24:42.500 "trtype": "TCP", 00:24:42.500 "adrfam": "IPv4", 00:24:42.500 "traddr": "10.0.0.2", 00:24:42.500 "trsvcid": "4420", 00:24:42.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.500 "prchk_reftag": false, 00:24:42.500 "prchk_guard": false, 00:24:42.500 "ctrlr_loss_timeout_sec": 0, 00:24:42.500 "reconnect_delay_sec": 0, 00:24:42.500 "fast_io_fail_timeout_sec": 0, 00:24:42.500 "psk": "key0", 00:24:42.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:42.500 "hdgst": false, 00:24:42.500 "ddgst": false 00:24:42.500 } 00:24:42.500 }, 00:24:42.500 { 00:24:42.500 "method": "bdev_nvme_set_hotplug", 00:24:42.500 "params": { 00:24:42.500 "period_us": 100000, 00:24:42.500 "enable": false 00:24:42.500 } 00:24:42.500 }, 00:24:42.500 { 00:24:42.500 "method": "bdev_enable_histogram", 00:24:42.500 "params": { 00:24:42.500 "name": "nvme0n1", 00:24:42.500 "enable": true 00:24:42.500 } 00:24:42.500 }, 00:24:42.500 { 00:24:42.500 "method": "bdev_wait_for_examine" 00:24:42.500 } 00:24:42.500 ] 00:24:42.500 }, 00:24:42.500 { 00:24:42.500 "subsystem": "nbd", 00:24:42.500 "config": [] 00:24:42.500 } 00:24:42.500 ] 00:24:42.500 }' 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 736445 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 736445 ']' 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 736445 00:24:42.500 22:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 736445 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 736445' 00:24:42.500 killing process with pid 736445 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 736445 00:24:42.500 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.500 00:24:42.500 Latency(us) 00:24:42.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.500 =================================================================================================================== 00:24:42.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.500 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 736445 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 736309 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 736309 ']' 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 736309 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 736309 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 736309' 00:24:42.761 killing process with pid 736309 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 736309 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 736309 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:42.761 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:42.761 "subsystems": [ 00:24:42.761 { 00:24:42.761 "subsystem": "keyring", 00:24:42.761 "config": [ 00:24:42.761 { 00:24:42.761 "method": "keyring_file_add_key", 00:24:42.761 "params": { 00:24:42.761 "name": "key0", 00:24:42.761 "path": "/tmp/tmp.sD2gyi45yI" 00:24:42.761 } 00:24:42.761 } 00:24:42.761 ] 00:24:42.761 }, 00:24:42.761 { 00:24:42.761 "subsystem": "iobuf", 00:24:42.761 "config": [ 00:24:42.761 { 00:24:42.761 "method": "iobuf_set_options", 
00:24:42.761 "params": { 00:24:42.761 "small_pool_count": 8192, 00:24:42.761 "large_pool_count": 1024, 00:24:42.761 "small_bufsize": 8192, 00:24:42.761 "large_bufsize": 135168 00:24:42.761 } 00:24:42.761 } 00:24:42.761 ] 00:24:42.761 }, 00:24:42.761 { 00:24:42.761 "subsystem": "sock", 00:24:42.761 "config": [ 00:24:42.761 { 00:24:42.761 "method": "sock_set_default_impl", 00:24:42.761 "params": { 00:24:42.761 "impl_name": "posix" 00:24:42.761 } 00:24:42.761 }, 00:24:42.761 { 00:24:42.761 "method": "sock_impl_set_options", 00:24:42.761 "params": { 00:24:42.761 "impl_name": "ssl", 00:24:42.761 "recv_buf_size": 4096, 00:24:42.761 "send_buf_size": 4096, 00:24:42.761 "enable_recv_pipe": true, 00:24:42.761 "enable_quickack": false, 00:24:42.761 "enable_placement_id": 0, 00:24:42.761 "enable_zerocopy_send_server": true, 00:24:42.761 "enable_zerocopy_send_client": false, 00:24:42.761 "zerocopy_threshold": 0, 00:24:42.761 "tls_version": 0, 00:24:42.762 "enable_ktls": false 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "sock_impl_set_options", 00:24:42.762 "params": { 00:24:42.762 "impl_name": "posix", 00:24:42.762 "recv_buf_size": 2097152, 00:24:42.762 "send_buf_size": 2097152, 00:24:42.762 "enable_recv_pipe": true, 00:24:42.762 "enable_quickack": false, 00:24:42.762 "enable_placement_id": 0, 00:24:42.762 "enable_zerocopy_send_server": true, 00:24:42.762 "enable_zerocopy_send_client": false, 00:24:42.762 "zerocopy_threshold": 0, 00:24:42.762 "tls_version": 0, 00:24:42.762 "enable_ktls": false 00:24:42.762 } 00:24:42.762 } 00:24:42.762 ] 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "subsystem": "vmd", 00:24:42.762 "config": [] 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "subsystem": "accel", 00:24:42.762 "config": [ 00:24:42.762 { 00:24:42.762 "method": "accel_set_options", 00:24:42.762 "params": { 00:24:42.762 "small_cache_size": 128, 00:24:42.762 "large_cache_size": 16, 00:24:42.762 "task_count": 2048, 00:24:42.762 "sequence_count": 2048, 00:24:42.762 "buf_count": 2048 00:24:42.762 } 00:24:42.762 } 00:24:42.762 ] 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "subsystem": "bdev", 00:24:42.762 "config": [ 00:24:42.762 { 00:24:42.762 "method": "bdev_set_options", 00:24:42.762 "params": { 00:24:42.762 "bdev_io_pool_size": 65535, 00:24:42.762 "bdev_io_cache_size": 256, 00:24:42.762 "bdev_auto_examine": true, 00:24:42.762 "iobuf_small_cache_size": 128, 00:24:42.762 "iobuf_large_cache_size": 16 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "bdev_raid_set_options", 00:24:42.762 "params": { 00:24:42.762 "process_window_size_kb": 1024, 00:24:42.762 "process_max_bandwidth_mb_sec": 0 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "bdev_iscsi_set_options", 00:24:42.762 "params": { 00:24:42.762 "timeout_sec": 30 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "bdev_nvme_set_options", 00:24:42.762 "params": { 00:24:42.762 "action_on_timeout": "none", 00:24:42.762 "timeout_us": 0, 00:24:42.762 "timeout_admin_us": 0, 00:24:42.762 "keep_alive_timeout_ms": 10000, 00:24:42.762 "arbitration_burst": 0, 00:24:42.762 "low_priority_weight": 0, 00:24:42.762 "medium_priority_weight": 0, 00:24:42.762 "high_priority_weight": 0, 00:24:42.762 "nvme_adminq_poll_period_us": 10000, 00:24:42.762 "nvme_ioq_poll_period_us": 0, 00:24:42.762 "io_queue_requests": 0, 00:24:42.762 "delay_cmd_submit": true, 00:24:42.762 "transport_retry_count": 4, 00:24:42.762 "bdev_retry_count": 3, 00:24:42.762 "transport_ack_timeout": 0, 00:24:42.762 
"ctrlr_loss_timeout_sec": 0, 00:24:42.762 "reconnect_delay_sec": 0, 00:24:42.762 "fast_io_fail_timeout_sec": 0, 00:24:42.762 "disable_auto_failback": false, 00:24:42.762 "generate_uuids": false, 00:24:42.762 "transport_tos": 0, 00:24:42.762 "nvme_error_stat": false, 00:24:42.762 "rdma_srq_size": 0, 00:24:42.762 "io_path_stat": false, 00:24:42.762 "allow_accel_sequence": false, 00:24:42.762 "rdma_max_cq_size": 0, 00:24:42.762 "rdma_cm_event_timeout_ms": 0, 00:24:42.762 "dhchap_digests": [ 00:24:42.762 "sha256", 00:24:42.762 "sha384", 00:24:42.762 "sha512" 00:24:42.762 ], 00:24:42.762 "dhchap_dhgroups": [ 00:24:42.762 "null", 00:24:42.762 "ffdhe2048", 00:24:42.762 "ffdhe3072", 00:24:42.762 "ffdhe4096", 00:24:42.762 "ffdhe6144", 00:24:42.762 "ffdhe8192" 00:24:42.762 ] 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "bdev_nvme_set_hotplug", 00:24:42.762 "params": { 00:24:42.762 "period_us": 100000, 00:24:42.762 "enable": false 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "bdev_malloc_create", 00:24:42.762 "params": { 00:24:42.762 "name": "malloc0", 00:24:42.762 "num_blocks": 8192, 00:24:42.762 "block_size": 4096, 00:24:42.762 "physical_block_size": 4096, 00:24:42.762 "uuid": "7230e628-b81d-4e29-ae2d-dc6bbe480c17", 00:24:42.762 "optimal_io_boundary": 0, 00:24:42.762 "md_size": 0, 00:24:42.762 "dif_type": 0, 00:24:42.762 "dif_is_head_of_md": false, 00:24:42.762 "dif_pi_format": 0 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "bdev_wait_for_examine" 00:24:42.762 } 00:24:42.762 ] 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "subsystem": "nbd", 00:24:42.762 "config": [] 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "subsystem": "scheduler", 00:24:42.762 "config": [ 00:24:42.762 { 00:24:42.762 "method": "framework_set_scheduler", 00:24:42.762 "params": { 00:24:42.762 "name": "static" 00:24:42.762 } 00:24:42.762 } 00:24:42.762 ] 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "subsystem": "nvmf", 00:24:42.762 "config": [ 00:24:42.762 { 00:24:42.762 "method": "nvmf_set_config", 00:24:42.762 "params": { 00:24:42.762 "discovery_filter": "match_any", 00:24:42.762 "admin_cmd_passthru": { 00:24:42.762 "identify_ctrlr": false 00:24:42.762 }, 00:24:42.762 "dhchap_digests": [ 00:24:42.762 "sha256", 00:24:42.762 "sha384", 00:24:42.762 "sha512" 00:24:42.762 ], 00:24:42.762 "dhchap_dhgroups": [ 00:24:42.762 "null", 00:24:42.762 "ffdhe2048", 00:24:42.762 "ffdhe3072", 00:24:42.762 "ffdhe4096", 00:24:42.762 "ffdhe6144", 00:24:42.762 "ffdhe8192" 00:24:42.762 ] 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "nvmf_set_max_subsystems", 00:24:42.762 "params": { 00:24:42.762 "max_subsystems": 1024 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "nvmf_set_crdt", 00:24:42.762 "params": { 00:24:42.762 "crdt1": 0, 00:24:42.762 "crdt2": 0, 00:24:42.762 "crdt3": 0 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "nvmf_create_transport", 00:24:42.762 "params": { 00:24:42.762 "trtype": "TCP", 00:24:42.762 "max_queue_depth": 128, 00:24:42.762 "max_io_qpairs_per_ctrlr": 127, 00:24:42.762 "in_capsule_data_size": 4096, 00:24:42.762 "max_io_size": 131072, 00:24:42.762 "io_unit_size": 131072, 00:24:42.762 "max_aq_depth": 128, 00:24:42.762 "num_shared_buffers": 511, 00:24:42.762 "buf_cache_size": 4294967295, 00:24:42.762 "dif_insert_or_strip": false, 00:24:42.762 "zcopy": false, 00:24:42.762 "c2h_success": false, 00:24:42.762 "sock_priority": 0, 00:24:42.762 "abort_timeout_sec": 1, 00:24:42.762 "ack_timeout": 0, 
00:24:42.762 "data_wr_pool_size": 0 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "nvmf_create_subsystem", 00:24:42.762 "params": { 00:24:42.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.762 "allow_any_host": false, 00:24:42.762 "serial_number": "00000000000000000000", 00:24:42.762 "model_number": "SPDK bdev Controller", 00:24:42.762 "max_namespaces": 32, 00:24:42.762 "min_cntlid": 1, 00:24:42.762 "max_cntlid": 65519, 00:24:42.762 "ana_reporting": false 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "nvmf_subsystem_add_host", 00:24:42.762 "params": { 00:24:42.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.762 "host": "nqn.2016-06.io.spdk:host1", 00:24:42.762 "psk": "key0" 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "nvmf_subsystem_add_ns", 00:24:42.762 "params": { 00:24:42.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.762 "namespace": { 00:24:42.762 "nsid": 1, 00:24:42.762 "bdev_name": "malloc0", 00:24:42.762 "nguid": "7230E628B81D4E29AE2DDC6BBE480C17", 00:24:42.762 "uuid": "7230e628-b81d-4e29-ae2d-dc6bbe480c17", 00:24:42.762 "no_auto_visible": false 00:24:42.762 } 00:24:42.762 } 00:24:42.762 }, 00:24:42.762 { 00:24:42.762 "method": "nvmf_subsystem_add_listener", 00:24:42.762 "params": { 00:24:42.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.762 "listen_address": { 00:24:42.762 "trtype": "TCP", 00:24:42.762 "adrfam": "IPv4", 00:24:42.762 "traddr": "10.0.0.2", 00:24:42.762 "trsvcid": "4420" 00:24:42.762 }, 00:24:42.762 "secure_channel": false, 00:24:42.762 "sock_impl": "ssl" 00:24:42.762 } 00:24:42.762 } 00:24:42.762 ] 00:24:42.762 } 00:24:42.762 ] 00:24:42.762 }' 00:24:42.762 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.762 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=737027 00:24:42.762 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 737027 00:24:42.762 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:42.762 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 737027 ']' 00:24:42.762 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.762 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.762 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.763 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.763 22:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.023 [2024-09-30 22:53:09.778755] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:24:43.023 [2024-09-30 22:53:09.778816] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.023 [2024-09-30 22:53:09.862809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.023 [2024-09-30 22:53:09.917056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.023 [2024-09-30 22:53:09.917089] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.023 [2024-09-30 22:53:09.917095] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.023 [2024-09-30 22:53:09.917100] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.023 [2024-09-30 22:53:09.917104] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.023 [2024-09-30 22:53:09.917150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.284 [2024-09-30 22:53:10.128240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.284 [2024-09-30 22:53:10.160241] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.284 [2024-09-30 22:53:10.160420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=737372 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 737372 /var/tmp/bdevperf.sock 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 737372 ']' 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:43.856 "subsystems": [ 00:24:43.856 { 00:24:43.856 "subsystem": "keyring", 00:24:43.856 "config": [ 00:24:43.856 { 00:24:43.856 "method": "keyring_file_add_key", 00:24:43.856 "params": { 00:24:43.856 "name": "key0", 00:24:43.856 "path": "/tmp/tmp.sD2gyi45yI" 00:24:43.856 } 00:24:43.856 } 00:24:43.856 ] 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "subsystem": "iobuf", 00:24:43.856 "config": [ 00:24:43.856 { 00:24:43.856 "method": "iobuf_set_options", 00:24:43.856 "params": { 00:24:43.856 "small_pool_count": 8192, 00:24:43.856 "large_pool_count": 1024, 00:24:43.856 "small_bufsize": 8192, 00:24:43.856 "large_bufsize": 135168 00:24:43.856 } 00:24:43.856 } 00:24:43.856 ] 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "subsystem": "sock", 00:24:43.856 "config": [ 00:24:43.856 { 00:24:43.856 "method": "sock_set_default_impl", 00:24:43.856 "params": { 00:24:43.856 "impl_name": "posix" 00:24:43.856 } 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "method": "sock_impl_set_options", 00:24:43.856 "params": { 00:24:43.856 "impl_name": "ssl", 00:24:43.856 "recv_buf_size": 4096, 00:24:43.856 "send_buf_size": 4096, 00:24:43.856 "enable_recv_pipe": true, 00:24:43.856 "enable_quickack": false, 00:24:43.856 "enable_placement_id": 0, 00:24:43.856 "enable_zerocopy_send_server": true, 00:24:43.856 "enable_zerocopy_send_client": false, 00:24:43.856 "zerocopy_threshold": 0, 00:24:43.856 "tls_version": 0, 00:24:43.856 "enable_ktls": false 00:24:43.856 } 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "method": "sock_impl_set_options", 00:24:43.856 "params": { 00:24:43.856 "impl_name": "posix", 00:24:43.856 "recv_buf_size": 2097152, 00:24:43.856 "send_buf_size": 2097152, 00:24:43.856 "enable_recv_pipe": true, 00:24:43.856 "enable_quickack": false, 00:24:43.856 "enable_placement_id": 0, 00:24:43.856 "enable_zerocopy_send_server": true, 00:24:43.856 "enable_zerocopy_send_client": false, 00:24:43.856 "zerocopy_threshold": 0, 00:24:43.856 "tls_version": 0, 00:24:43.856 "enable_ktls": false 00:24:43.856 } 00:24:43.856 } 00:24:43.856 ] 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "subsystem": "vmd", 00:24:43.856 "config": [] 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "subsystem": "accel", 00:24:43.856 "config": [ 00:24:43.856 { 00:24:43.856 "method": "accel_set_options", 00:24:43.856 "params": { 00:24:43.856 "small_cache_size": 128, 00:24:43.856 "large_cache_size": 16, 00:24:43.856 "task_count": 2048, 00:24:43.856 "sequence_count": 2048, 00:24:43.856 "buf_count": 2048 00:24:43.856 } 00:24:43.856 } 00:24:43.856 ] 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "subsystem": "bdev", 00:24:43.856 "config": [ 00:24:43.856 { 00:24:43.856 "method": "bdev_set_options", 00:24:43.856 "params": { 00:24:43.856 "bdev_io_pool_size": 65535, 00:24:43.856 "bdev_io_cache_size": 256, 00:24:43.856 "bdev_auto_examine": true, 00:24:43.856 "iobuf_small_cache_size": 128, 00:24:43.856 "iobuf_large_cache_size": 16 00:24:43.856 } 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "method": "bdev_raid_set_options", 00:24:43.856 
"params": { 00:24:43.856 "process_window_size_kb": 1024, 00:24:43.856 "process_max_bandwidth_mb_sec": 0 00:24:43.856 } 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "method": "bdev_iscsi_set_options", 00:24:43.856 "params": { 00:24:43.856 "timeout_sec": 30 00:24:43.856 } 00:24:43.856 }, 00:24:43.856 { 00:24:43.856 "method": "bdev_nvme_set_options", 00:24:43.856 "params": { 00:24:43.856 "action_on_timeout": "none", 00:24:43.856 "timeout_us": 0, 00:24:43.856 "timeout_admin_us": 0, 00:24:43.856 "keep_alive_timeout_ms": 10000, 00:24:43.856 "arbitration_burst": 0, 00:24:43.856 "low_priority_weight": 0, 00:24:43.856 "medium_priority_weight": 0, 00:24:43.856 "high_priority_weight": 0, 00:24:43.856 "nvme_adminq_poll_period_us": 10000, 00:24:43.856 "nvme_ioq_poll_period_us": 0, 00:24:43.856 "io_queue_requests": 512, 00:24:43.856 "delay_cmd_submit": true, 00:24:43.856 "transport_retry_count": 4, 00:24:43.856 "bdev_retry_count": 3, 00:24:43.856 "transport_ack_timeout": 0, 00:24:43.856 "ctrlr_loss_timeout_sec": 0, 00:24:43.856 "reconnect_delay_sec": 0, 00:24:43.856 "fast_io_fail_timeout_sec": 0, 00:24:43.856 "disable_auto_failback": false, 00:24:43.856 "generate_uuids": false, 00:24:43.856 "transport_tos": 0, 00:24:43.856 "nvme_error_stat": false, 00:24:43.856 "rdma_srq_size": 0, 00:24:43.856 "io_path_stat": false, 00:24:43.856 "allow_accel_sequence": false, 00:24:43.856 "rdma_max_cq_size": 0, 00:24:43.856 "rdma_cm_event_timeout_ms": 0, 00:24:43.856 "dhchap_digests": [ 00:24:43.856 "sha256", 00:24:43.857 "sha384", 00:24:43.857 "sha512" 00:24:43.857 ], 00:24:43.857 "dhchap_dhgroups": [ 00:24:43.857 "null", 00:24:43.857 "ffdhe2048", 00:24:43.857 "ffdhe3072", 00:24:43.857 "ffdhe4096", 00:24:43.857 "ffdhe6144", 00:24:43.857 "ffdhe8192" 00:24:43.857 ] 00:24:43.857 } 00:24:43.857 }, 00:24:43.857 { 00:24:43.857 "method": "bdev_nvme_attach_controller", 00:24:43.857 "params": { 00:24:43.857 "name": "nvme0", 00:24:43.857 "trtype": "TCP", 00:24:43.857 "adrfam": "IPv4", 00:24:43.857 "traddr": "10.0.0.2", 00:24:43.857 "trsvcid": "4420", 00:24:43.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.857 "prchk_reftag": false, 00:24:43.857 "prchk_guard": false, 00:24:43.857 "ctrlr_loss_timeout_sec": 0, 00:24:43.857 "reconnect_delay_sec": 0, 00:24:43.857 "fast_io_fail_timeout_sec": 0, 00:24:43.857 "psk": "key0", 00:24:43.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.857 "hdgst": false, 00:24:43.857 "ddgst": false 00:24:43.857 } 00:24:43.857 }, 00:24:43.857 { 00:24:43.857 "method": "bdev_nvme_set_hotplug", 00:24:43.857 "params": { 00:24:43.857 "period_us": 100000, 00:24:43.857 "enable": false 00:24:43.857 } 00:24:43.857 }, 00:24:43.857 { 00:24:43.857 "method": "bdev_enable_histogram", 00:24:43.857 "params": { 00:24:43.857 "name": "nvme0n1", 00:24:43.857 "enable": true 00:24:43.857 } 00:24:43.857 }, 00:24:43.857 { 00:24:43.857 "method": "bdev_wait_for_examine" 00:24:43.857 } 00:24:43.857 ] 00:24:43.857 }, 00:24:43.857 { 00:24:43.857 "subsystem": "nbd", 00:24:43.857 "config": [] 00:24:43.857 } 00:24:43.857 ] 00:24:43.857 }' 00:24:43.857 [2024-09-30 22:53:10.657375] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:24:43.857 [2024-09-30 22:53:10.657430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737372 ] 00:24:43.857 [2024-09-30 22:53:10.735621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.857 [2024-09-30 22:53:10.789176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.118 [2024-09-30 22:53:10.924487] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.688 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.688 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:44.688 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.688 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:44.688 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.688 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.947 Running I/O for 1 seconds... 00:24:45.887 5569.00 IOPS, 21.75 MiB/s 00:24:45.887 Latency(us) 00:24:45.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.887 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:45.887 Verification LBA range: start 0x0 length 0x2000 00:24:45.887 nvme0n1 : 1.02 5594.55 21.85 0.00 0.00 22711.82 4778.67 23920.64 00:24:45.887 =================================================================================================================== 00:24:45.887 Total : 5594.55 21.85 0.00 0.00 22711.82 4778.67 23920.64 00:24:45.887 { 00:24:45.887 "results": [ 00:24:45.887 { 00:24:45.887 "job": "nvme0n1", 00:24:45.887 "core_mask": "0x2", 00:24:45.887 "workload": "verify", 00:24:45.887 "status": "finished", 00:24:45.887 "verify_range": { 00:24:45.887 "start": 0, 00:24:45.887 "length": 8192 00:24:45.887 }, 00:24:45.887 "queue_depth": 128, 00:24:45.887 "io_size": 4096, 00:24:45.887 "runtime": 1.018491, 00:24:45.887 "iops": 5594.551154600286, 00:24:45.887 "mibps": 21.85371544765737, 00:24:45.887 "io_failed": 0, 00:24:45.887 "io_timeout": 0, 00:24:45.887 "avg_latency_us": 22711.816805896808, 00:24:45.887 "min_latency_us": 4778.666666666667, 00:24:45.887 "max_latency_us": 23920.64 00:24:45.887 } 00:24:45.887 ], 00:24:45.887 "core_count": 1 00:24:45.887 } 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:45.887 nvmf_trace.0 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 737372 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 737372 ']' 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 737372 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.887 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 737372 00:24:46.147 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:46.147 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:46.147 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 737372' 00:24:46.147 killing process with pid 737372 00:24:46.147 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 737372 00:24:46.147 Received shutdown signal, test time was about 1.000000 seconds 00:24:46.147 00:24:46.147 Latency(us) 00:24:46.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.147 =================================================================================================================== 00:24:46.147 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.147 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 737372 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.147 rmmod nvme_tcp 00:24:46.147 rmmod nvme_fabrics 00:24:46.147 rmmod nvme_keyring 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 
00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 737027 ']' 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 737027 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 737027 ']' 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 737027 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.147 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 737027 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 737027' 00:24:46.468 killing process with pid 737027 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 737027 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 737027 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.468 22:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BOSberIpCK /tmp/tmp.EalY2zLeIu /tmp/tmp.sD2gyi45yI 00:24:49.011 00:24:49.011 real 1m28.675s 00:24:49.011 user 2m18.410s 00:24:49.011 sys 0m26.854s 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.011 ************************************ 00:24:49.011 END TEST nvmf_tls 00:24:49.011 ************************************ 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:49.011 ************************************ 00:24:49.011 START TEST nvmf_fips 00:24:49.011 ************************************ 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:49.011 * Looking for test storage... 00:24:49.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:49.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.011 --rc genhtml_branch_coverage=1 00:24:49.011 --rc genhtml_function_coverage=1 00:24:49.011 --rc genhtml_legend=1 00:24:49.011 --rc geninfo_all_blocks=1 00:24:49.011 --rc geninfo_unexecuted_blocks=1 00:24:49.011 00:24:49.011 ' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:49.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.011 --rc genhtml_branch_coverage=1 00:24:49.011 --rc genhtml_function_coverage=1 00:24:49.011 --rc genhtml_legend=1 00:24:49.011 --rc geninfo_all_blocks=1 00:24:49.011 --rc geninfo_unexecuted_blocks=1 00:24:49.011 00:24:49.011 ' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:49.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.011 --rc genhtml_branch_coverage=1 00:24:49.011 --rc genhtml_function_coverage=1 00:24:49.011 --rc genhtml_legend=1 00:24:49.011 --rc geninfo_all_blocks=1 00:24:49.011 --rc geninfo_unexecuted_blocks=1 00:24:49.011 00:24:49.011 ' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:49.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.011 --rc genhtml_branch_coverage=1 00:24:49.011 --rc genhtml_function_coverage=1 00:24:49.011 --rc genhtml_legend=1 00:24:49.011 --rc geninfo_all_blocks=1 00:24:49.011 --rc geninfo_unexecuted_blocks=1 00:24:49.011 00:24:49.011 ' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.011 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:49.012 22:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:49.012 Error setting digest 00:24:49.012 40729581077F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:49.012 40729581077F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:49.012 
22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.012 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.013 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:49.013 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:49.013 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.013 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.152 22:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:57.152 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:57.152 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:57.152 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.152 22:53:23 
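For each matching PCI function, the loop above globs /sys/bus/pci/devices/$pci/net/* to learn which kernel netdev the port became, which is how 0000:31:00.0 and 0000:31:00.1 resolve to cvl_0_0 and cvl_0_1 in the trace. The same lookup in isolation (the example address is the one from the trace; any NIC bound to a netdev driver works):

pci=0000:31:00.0    # PCI function from the trace; substitute your own
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue        # glob stays literal if nothing is bound
    name=${dev##*/}                  # strip the /sys/.../net/ prefix
    state=$(cat "$dev"/operstate)    # the script also checks for "up"
    echo "net device under $pci: $name (state: $state)"
done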
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:57.153 Found net devices under 0000:31:00.0: cvl_0_0 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:57.153 Found net devices under 0000:31:00.1: cvl_0_1 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:24:57.153 00:24:57.153 --- 10.0.0.2 ping statistics --- 00:24:57.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.153 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:24:57.153 00:24:57.153 --- 10.0.0.1 ping statistics --- 00:24:57.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.153 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=742153 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 742153 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 742153 ']' 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.153 22:53:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.153 [2024-09-30 22:53:23.700218] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
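The sequence above builds SPDK's two-endpoint TCP topology on a single host: one E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule opens port 4420, both directions are ping-verified, and only then is nvmf_tgt launched inside the namespace. Condensed from the trace (interface names, addresses, and the namespace name are the trace's own; run as root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target ns -> initiator
# The target itself then runs inside the namespace, as above:
#   ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2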
00:24:57.153 [2024-09-30 22:53:23.700292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.153 [2024-09-30 22:53:23.793345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.153 [2024-09-30 22:53:23.886124] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.153 [2024-09-30 22:53:23.886186] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.153 [2024-09-30 22:53:23.886195] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.153 [2024-09-30 22:53:23.886202] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.153 [2024-09-30 22:53:23.886208] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.153 [2024-09-30 22:53:23.886239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ok7 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ok7 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ok7 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ok7 00:24:57.728 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:57.728 [2024-09-30 22:53:24.735601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.990 [2024-09-30 22:53:24.751602] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.990 [2024-09-30 22:53:24.751960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.990 malloc0 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.990 22:53:24 
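Before configuring the target, fips.sh writes the TLS pre-shared key, in the NVMe TLS interchange format (NVMeTLSkey-1:01:...), to a mode-0600 temp file; that file form is what the SPDK keyring consumes on both sides of the test. The same three steps on their own (the key below is the test's published sample key from the trace, not a secret):

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)   # the trace got /tmp/spdk-psk.ok7
echo -n "$key" > "$key_path"         # -n: the key file must not gain a newline
chmod 0600 "$key_path"               # restrictive mode, as in the trace
echo "PSK written to $key_path"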
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=742502 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 742502 /var/tmp/bdevperf.sock 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 742502 ']' 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.990 22:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.990 [2024-09-30 22:53:24.912788] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:24:57.990 [2024-09-30 22:53:24.912863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742502 ] 00:24:57.990 [2024-09-30 22:53:24.995836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.252 [2024-09-30 22:53:25.086738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.823 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.823 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:58.823 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ok7 00:24:59.086 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:59.086 [2024-09-30 22:53:26.081172] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.347 TLSTESTn1 00:24:59.347 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.347 Running I/O for 10 seconds... 
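With the target listening, the initiator side registers the same PSK file in bdevperf's keyring, attaches a TLS-protected NVMe-oF controller over it, and drives the 10-second queue-depth-128 verify workload through bdevperf's RPC socket. The three RPCs below are pulled out of the trace, with long paths abbreviated: $rpc stands for scripts/rpc.py and $sock for /var/tmp/bdevperf.sock:

$rpc -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.ok7
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# bdevperf itself was started with: -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10
examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests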
00:25:09.644 4218.00 IOPS, 16.48 MiB/s 4698.50 IOPS, 18.35 MiB/s 4482.67 IOPS, 17.51 MiB/s 4603.50 IOPS, 17.98 MiB/s 4716.40 IOPS, 18.42 MiB/s 4778.67 IOPS, 18.67 MiB/s 4855.29 IOPS, 18.97 MiB/s 4858.62 IOPS, 18.98 MiB/s 4894.89 IOPS, 19.12 MiB/s 4927.50 IOPS, 19.25 MiB/s 00:25:09.644 Latency(us) 00:25:09.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.644 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:09.644 Verification LBA range: start 0x0 length 0x2000 00:25:09.644 TLSTESTn1 : 10.02 4930.40 19.26 0.00 0.00 25919.96 5079.04 45219.84 00:25:09.644 =================================================================================================================== 00:25:09.644 Total : 4930.40 19.26 0.00 0.00 25919.96 5079.04 45219.84 00:25:09.644 { 00:25:09.644 "results": [ 00:25:09.644 { 00:25:09.644 "job": "TLSTESTn1", 00:25:09.644 "core_mask": "0x4", 00:25:09.644 "workload": "verify", 00:25:09.644 "status": "finished", 00:25:09.644 "verify_range": { 00:25:09.644 "start": 0, 00:25:09.644 "length": 8192 00:25:09.644 }, 00:25:09.644 "queue_depth": 128, 00:25:09.645 "io_size": 4096, 00:25:09.645 "runtime": 10.019868, 00:25:09.645 "iops": 4930.404272790819, 00:25:09.645 "mibps": 19.259391690589137, 00:25:09.645 "io_failed": 0, 00:25:09.645 "io_timeout": 0, 00:25:09.645 "avg_latency_us": 25919.964244362574, 00:25:09.645 "min_latency_us": 5079.04, 00:25:09.645 "max_latency_us": 45219.84 00:25:09.645 } 00:25:09.645 ], 00:25:09.645 "core_count": 1 00:25:09.645 } 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:09.645 nvmf_trace.0 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 742502 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 742502 ']' 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 742502 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 742502 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742502' 00:25:09.645 killing process with pid 742502 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 742502 00:25:09.645 Received shutdown signal, test time was about 10.000000 seconds 00:25:09.645 00:25:09.645 Latency(us) 00:25:09.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.645 =================================================================================================================== 00:25:09.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 742502 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.645 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.645 rmmod nvme_tcp 00:25:09.645 rmmod nvme_fabrics 00:25:09.906 rmmod nvme_keyring 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 742153 ']' 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 742153 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 742153 ']' 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 742153 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742153 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742153' 00:25:09.906 killing process with pid 742153 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 742153 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@974 -- # wait 742153 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.906 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.456 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:12.456 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ok7 00:25:12.456 00:25:12.456 real 0m23.492s 00:25:12.456 user 0m24.668s 00:25:12.456 sys 0m10.204s 00:25:12.456 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:12.456 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:12.456 ************************************ 00:25:12.456 END TEST nvmf_fips 00:25:12.456 ************************************ 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:12.456 ************************************ 00:25:12.456 START TEST nvmf_control_msg_list 00:25:12.456 ************************************ 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:12.456 * Looking for test storage... 
00:25:12.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:12.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.456 --rc genhtml_branch_coverage=1 00:25:12.456 --rc genhtml_function_coverage=1 00:25:12.456 --rc genhtml_legend=1 00:25:12.456 --rc geninfo_all_blocks=1 00:25:12.456 --rc geninfo_unexecuted_blocks=1 00:25:12.456 00:25:12.456 ' 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:12.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.456 --rc genhtml_branch_coverage=1 00:25:12.456 --rc genhtml_function_coverage=1 00:25:12.456 --rc genhtml_legend=1 00:25:12.456 --rc geninfo_all_blocks=1 00:25:12.456 --rc geninfo_unexecuted_blocks=1 00:25:12.456 00:25:12.456 ' 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:12.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.456 --rc genhtml_branch_coverage=1 00:25:12.456 --rc genhtml_function_coverage=1 00:25:12.456 --rc genhtml_legend=1 00:25:12.456 --rc geninfo_all_blocks=1 00:25:12.456 --rc geninfo_unexecuted_blocks=1 00:25:12.456 00:25:12.456 ' 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:12.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.456 --rc genhtml_branch_coverage=1 00:25:12.456 --rc genhtml_function_coverage=1 00:25:12.456 --rc genhtml_legend=1 00:25:12.456 --rc geninfo_all_blocks=1 00:25:12.456 --rc geninfo_unexecuted_blocks=1 00:25:12.456 00:25:12.456 ' 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.456 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.457 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:20.599 22:53:46 
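Note the harmless but real shell error captured above: common.sh line 33 evaluates "[ '' -eq 1 ]", and -eq requires integer operands, so an unset or empty variable trips "integer expression expected" before the script falls through to its -n check. A defensive pattern that avoids the noise; the variable name here is hypothetical, the principle is generic bash:

# '' is not an integer, so this reproduces the complaint seen in the log:
#   [: : integer expression expected
SOME_FLAG=''
[ "$SOME_FLAG" -eq 1 ] 2>/dev/null || true

# Defaulting the expansion keeps the test numeric and silent:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi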
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:20.599 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:20.600 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:20.600 22:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:20.600 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:20.600 Found net devices under 0000:31:00.0: cvl_0_0 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:20.600 Found net devices under 
0000:31:00.1: cvl_0_1 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.600 22:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:25:20.600 00:25:20.600 --- 10.0.0.2 ping statistics --- 00:25:20.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.600 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:25:20.600 00:25:20.600 --- 10.0.0.1 ping statistics --- 00:25:20.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.600 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:20.600 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=748919 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 748919 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 748919 ']' 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:20.601 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.601 [2024-09-30 22:53:46.995551] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:25:20.601 [2024-09-30 22:53:46.995606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.601 [2024-09-30 22:53:47.084256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.601 [2024-09-30 22:53:47.180279] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.601 [2024-09-30 22:53:47.180345] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.601 [2024-09-30 22:53:47.180354] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.601 [2024-09-30 22:53:47.180362] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.601 [2024-09-30 22:53:47.180368] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.601 [2024-09-30 22:53:47.180395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.861 [2024-09-30 22:53:47.870672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.861 22:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.861 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:21.122 Malloc0 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:21.122 [2024-09-30 22:53:47.942164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.122 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=749267 00:25:21.123 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:21.123 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=749268 00:25:21.123 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:21.123 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=749269 00:25:21.123 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 749267 00:25:21.123 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
[2024-09-30 22:53:48.022663] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
[2024-09-30 22:53:48.032712] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
[2024-09-30 22:53:48.033021] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:25:22.068 Initializing NVMe Controllers
00:25:22.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:25:22.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:25:22.068 Initialization complete. Launching workers.
00:25:22.068 ========================================================
00:25:22.068 Latency(us)
00:25:22.068 Device Information : IOPS MiB/s Average min max
00:25:22.068 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1496.00 5.84 668.16 256.28 904.09
00:25:22.068 ========================================================
00:25:22.068 Total : 1496.00 5.84 668.16 256.28 904.09
00:25:22.068
00:25:22.328 Initializing NVMe Controllers
00:25:22.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:25:22.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:25:22.328 Initialization complete. Launching workers.
00:25:22.328 ========================================================
00:25:22.328 Latency(us)
00:25:22.328 Device Information : IOPS MiB/s Average min max
00:25:22.328 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1453.00 5.68 688.06 190.17 946.86
00:25:22.328 ========================================================
00:25:22.328 Total : 1453.00 5.68 688.06 190.17 946.86
00:25:22.328
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 749268
00:25:22.328 Initializing NVMe Controllers
00:25:22.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:25:22.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:25:22.328 Initialization complete. Launching workers.
00:25:22.328 ========================================================
00:25:22.328 Latency(us)
00:25:22.328 Device Information : IOPS MiB/s Average min max
00:25:22.328 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1488.00 5.81 671.75 210.87 893.55
00:25:22.328 ========================================================
00:25:22.328 Total : 1488.00 5.81 671.75 210.87 893.55
00:25:22.328
00:25:22.328 [2024-09-30 22:53:49.266820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7ac80 is same with the state(6) to be set
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 749269
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:25:22.328 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:22.329 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:22.329 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 748919 ']'
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 748919
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 748919 ']'
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 748919
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 748919
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 748919'
00:25:22.595 killing process with pid 748919
00:25:22.595 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 748919
00:25:22.595 22:53:49
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 748919 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.899 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.829 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:24.829 00:25:24.829 real 0m12.633s 00:25:24.829 user 0m8.043s 00:25:24.829 sys 0m6.720s 00:25:24.829 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:24.829 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.829 ************************************ 00:25:24.829 END TEST nvmf_control_msg_list 00:25:24.829 ************************************ 00:25:24.829 22:53:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:24.829 22:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:24.829 22:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:24.829 22:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:24.829 ************************************ 00:25:24.829 START TEST nvmf_wait_for_buf 00:25:24.829 ************************************ 00:25:24.829 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:25.092 * Looking for test storage... 
00:25:25.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:25.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.092 --rc genhtml_branch_coverage=1 00:25:25.092 --rc genhtml_function_coverage=1 00:25:25.092 --rc genhtml_legend=1 00:25:25.092 --rc geninfo_all_blocks=1 00:25:25.092 --rc geninfo_unexecuted_blocks=1 00:25:25.092 00:25:25.092 ' 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:25.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.092 --rc genhtml_branch_coverage=1 00:25:25.092 --rc genhtml_function_coverage=1 00:25:25.092 --rc genhtml_legend=1 00:25:25.092 --rc geninfo_all_blocks=1 00:25:25.092 --rc geninfo_unexecuted_blocks=1 00:25:25.092 00:25:25.092 ' 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:25.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.092 --rc genhtml_branch_coverage=1 00:25:25.092 --rc genhtml_function_coverage=1 00:25:25.092 --rc genhtml_legend=1 00:25:25.092 --rc geninfo_all_blocks=1 00:25:25.092 --rc geninfo_unexecuted_blocks=1 00:25:25.092 00:25:25.092 ' 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:25.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.092 --rc genhtml_branch_coverage=1 00:25:25.092 --rc genhtml_function_coverage=1 00:25:25.092 --rc genhtml_legend=1 00:25:25.092 --rc geninfo_all_blocks=1 00:25:25.092 --rc geninfo_unexecuted_blocks=1 00:25:25.092 00:25:25.092 ' 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.092 22:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.092 22:53:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.092 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:25.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # 
'[' -z tcp ']' 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:25.093 22:53:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.237 
22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.237 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:33.238 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:33.238 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:33.238 
22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:33.238 Found net devices under 0000:31:00.0: cvl_0_0 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:33.238 Found net devices under 0000:31:00.1: cvl_0_1 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.238 22:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:33.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:33.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms
00:25:33.238
00:25:33.238 --- 10.0.0.2 ping statistics ---
00:25:33.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:33.238 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:33.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:33.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms
00:25:33.238
00:25:33.238 --- 10.0.0.1 ping statistics ---
00:25:33.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:33.238 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:33.238 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=753737
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 753737
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 753737 ']'
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:33.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:33.239 22:53:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:25:33.239 [2024-09-30 22:53:59.798566] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:25:33.239 [2024-09-30 22:53:59.798637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.239 [2024-09-30 22:53:59.888760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.239 [2024-09-30 22:53:59.984491] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.239 [2024-09-30 22:53:59.984552] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.239 [2024-09-30 22:53:59.984560] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.239 [2024-09-30 22:53:59.984567] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.239 [2024-09-30 22:53:59.984574] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.239 [2024-09-30 22:53:59.984600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.812 22:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 Malloc0 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 [2024-09-30 22:54:00.777141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.812 [2024-09-30 22:54:00.813462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.812 22:54:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:34.074 [2024-09-30 22:54:00.902015] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:25:35.461 Initializing NVMe Controllers
00:25:35.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:25:35.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:25:35.461 Initialization complete. Launching workers.
00:25:35.461 ========================================================
00:25:35.461 Latency(us)
00:25:35.461 Device Information : IOPS MiB/s Average min max
00:25:35.461 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165836.01 47853.85 191554.73
00:25:35.461 ========================================================
00:25:35.461 Total : 25.00 3.12 165836.01 47853.85 191554.73
00:25:35.461
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]]
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:25:35.461 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:35.462 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 753737 ']'
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 753737
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 753737 ']'
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 753737
00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
common/autotest_common.sh@955 -- # uname 00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.462 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 753737 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 753737' 00:25:35.724 killing process with pid 753737 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 753737 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 753737 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.724 22:54:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.275 22:54:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:38.275 00:25:38.275 real 0m12.948s 00:25:38.275 user 0m5.192s 00:25:38.275 sys 0m6.313s 00:25:38.275 22:54:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:38.275 22:54:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.275 ************************************ 00:25:38.275 END TEST nvmf_wait_for_buf 00:25:38.275 ************************************ 00:25:38.275 22:54:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:38.275 22:54:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:38.275 22:54:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:38.275 22:54:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:38.275 22:54:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.275 22:54:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:25:46.419 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:46.419 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:46.419 Found net devices under 0000:31:00.0: cvl_0_0 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:46.419 Found net devices under 0000:31:00.1: 
cvl_0_1 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:46.419 ************************************ 00:25:46.419 START TEST nvmf_perf_adq 00:25:46.419 ************************************ 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:46.419 * Looking for test storage... 00:25:46.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.419 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:46.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.420 --rc genhtml_branch_coverage=1 00:25:46.420 --rc genhtml_function_coverage=1 00:25:46.420 --rc genhtml_legend=1 00:25:46.420 --rc geninfo_all_blocks=1 00:25:46.420 --rc geninfo_unexecuted_blocks=1 00:25:46.420 00:25:46.420 ' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:46.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.420 --rc genhtml_branch_coverage=1 00:25:46.420 --rc genhtml_function_coverage=1 00:25:46.420 --rc genhtml_legend=1 00:25:46.420 --rc geninfo_all_blocks=1 00:25:46.420 --rc geninfo_unexecuted_blocks=1 00:25:46.420 00:25:46.420 ' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:46.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.420 --rc genhtml_branch_coverage=1 00:25:46.420 --rc genhtml_function_coverage=1 00:25:46.420 --rc genhtml_legend=1 00:25:46.420 --rc geninfo_all_blocks=1 00:25:46.420 --rc geninfo_unexecuted_blocks=1 00:25:46.420 00:25:46.420 ' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:46.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.420 --rc genhtml_branch_coverage=1 00:25:46.420 --rc genhtml_function_coverage=1 00:25:46.420 --rc genhtml_legend=1 00:25:46.420 --rc geninfo_all_blocks=1 00:25:46.420 --rc geninfo_unexecuted_blocks=1 00:25:46.420 00:25:46.420 ' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
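
The version gate traced just above comes from scripts/common.sh: the installed lcov version string is split on '.', '-' and ':' and compared field by field against 2 (the lt 1.15 2 call). A minimal standalone sketch of the same comparison; the helper name ver_lt is ours, not SPDK's:

  ver_lt() {                       # "is $1 older than $2?", mirroring cmp_versions in the trace
      local IFS=.-:                # split on the same separators the trace shows (IFS=.-:)
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first differing field decides
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                     # equal versions are not "less than"
  }
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov predates 2.x"
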
00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:46.420 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.420 22:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:53.006 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:53.006 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.006 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:53.007 Found net devices under 0000:31:00.0: cvl_0_0 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:53.007 Found net devices under 0000:31:00.1: cvl_0_1 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:53.007 22:54:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:54.973 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:56.890 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.181 22:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.181 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:02.182 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:02.182 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:02.182 Found net devices under 0000:31:00.0: cvl_0_0 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:02.182 Found net devices under 0000:31:00.1: cvl_0_1 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.182 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1
00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:02.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:02.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms
00:26:02.182
00:26:02.182 --- 10.0.0.2 ping statistics ---
00:26:02.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:02.182 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms
00:26:02.182 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:02.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:02.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms
00:26:02.182
00:26:02.182 --- 10.0.0.1 ping statistics ---
00:26:02.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:02.182 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=764924
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 764924
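
Condensed from the nvmf_tcp_init trace above: the two physical e810 ports are split across a network namespace so that target (cvl_0_0, 10.0.0.2) and initiator (cvl_0_1, 10.0.0.1) talk over a real link. Run by hand, the setup is roughly the following; the trace additionally tags the iptables rule with an SPDK_NVMF comment:

  ip netns add cvl_0_0_ns_spdk                               # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                         # root ns -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # and back
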
00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 764924 ']' 00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.443 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.443 [2024-09-30 22:54:29.313195] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:26:02.443 [2024-09-30 22:54:29.313264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.443 [2024-09-30 22:54:29.405137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.704 [2024-09-30 22:54:29.502736] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.704 [2024-09-30 22:54:29.502801] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.704 [2024-09-30 22:54:29.502810] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.704 [2024-09-30 22:54:29.502817] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.704 [2024-09-30 22:54:29.502823] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
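
nvmfappstart launches the target inside that namespace with RPC dispatch held back (--wait-for-rpc), then waitforlisten blocks until the app's RPC socket answers. The polling loop below is our sketch of that wait, not the helper itself; it assumes the default /var/tmp/spdk.sock RPC socket:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do   # RPC socket up yet?
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.5
  done
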
00:26:02.704 [2024-09-30 22:54:29.502989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:26:02.704 [2024-09-30 22:54:29.503155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:26:02.704 [2024-09-30 22:54:29.503316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:26:02.704 [2024-09-30 22:54:29.503317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:03.276 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.538 [2024-09-30 22:54:30.357603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.538 Malloc1
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:03.538 [2024-09-30 22:54:30.423365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=765105
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:26:03.538 22:54:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:26:05.452 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:26:05.452 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:05.452 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:05.452 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:05.452 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:26:05.452 "tick_rate": 2400000000,
00:26:05.452 "poll_groups": [
00:26:05.452 {
00:26:05.452 "name": "nvmf_tgt_poll_group_000",
00:26:05.452 "admin_qpairs": 1,
00:26:05.452 "io_qpairs": 1,
00:26:05.452 "current_admin_qpairs": 1,
00:26:05.452 "current_io_qpairs": 1,
00:26:05.452 "pending_bdev_io": 0,
00:26:05.452 "completed_nvme_io": 17126,
00:26:05.452 "transports": [
00:26:05.452 {
00:26:05.452 "trtype": "TCP"
00:26:05.452 }
00:26:05.452 ]
00:26:05.452 },
00:26:05.452 {
00:26:05.452 "name": "nvmf_tgt_poll_group_001",
00:26:05.452 "admin_qpairs": 0,
00:26:05.452 "io_qpairs": 1,
00:26:05.452 "current_admin_qpairs": 0,
00:26:05.452 "current_io_qpairs": 1,
00:26:05.452 "pending_bdev_io": 0,
00:26:05.452 "completed_nvme_io": 19043,
00:26:05.452 "transports": [
00:26:05.452 {
00:26:05.452 "trtype": "TCP"
00:26:05.452 }
00:26:05.452 ]
00:26:05.452 },
00:26:05.452 {
00:26:05.452 "name": "nvmf_tgt_poll_group_002",
00:26:05.452 "admin_qpairs": 0,
00:26:05.452 "io_qpairs": 1,
00:26:05.452 "current_admin_qpairs": 0,
00:26:05.452 "current_io_qpairs": 1,
00:26:05.452 "pending_bdev_io": 0,
00:26:05.452 "completed_nvme_io": 18866,
00:26:05.452 "transports": [
00:26:05.452 {
00:26:05.452 "trtype": "TCP"
00:26:05.452 }
00:26:05.452 ]
00:26:05.452 },
00:26:05.452 {
00:26:05.452 "name": "nvmf_tgt_poll_group_003",
00:26:05.452 "admin_qpairs": 0,
00:26:05.452 "io_qpairs": 1,
00:26:05.452 "current_admin_qpairs": 0,
00:26:05.452 "current_io_qpairs": 1,
00:26:05.452 "pending_bdev_io": 0,
00:26:05.452 "completed_nvme_io": 17688,
00:26:05.452 "transports": [
00:26:05.452 {
00:26:05.452 "trtype": "TCP"
00:26:05.452 }
00:26:05.452 ]
00:26:05.452 }
00:26:05.452 ]
00:26:05.452 }'
00:26:05.452 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:26:05.452 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:26:05.713 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:26:05.713 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:26:05.713 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 765105
00:26:13.845 Initializing NVMe Controllers
00:26:13.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:13.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:26:13.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:26:13.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:26:13.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:26:13.845 Initialization complete. Launching workers.
00:26:13.845 ========================================================
00:26:13.845 Latency(us)
00:26:13.845 Device Information : IOPS MiB/s Average min max
00:26:13.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12992.09 50.75 4926.19 1331.40 11272.72
00:26:13.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14031.37 54.81 4560.56 954.84 11705.66
00:26:13.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13806.17 53.93 4644.43 1465.84 44007.36
00:26:13.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13393.88 52.32 4783.73 1181.58 42918.16
00:26:13.845 ========================================================
00:26:13.845 Total : 54223.52 211.81 4724.64 954.84 44007.36
00:26:13.845
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:13.845 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 764924 ']'
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 764924
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 764924 ']'
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 764924
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 764924
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 764924'
killing process with pid 764924
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 764924
00:26:13.845 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 764924
00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:26:14.106 22:54:40
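
For reference, each rpc_cmd call traced in this first pass is an ordinary SPDK RPC; outside the harness the same provisioning could be driven with scripts/rpc.py against the target's default /var/tmp/spdk.sock. A minimal sketch, with only the rpc.py invocation path assumed and every method name and argument copied from the trace above:

  # Sketch: first-pass (placement-id 0) ADQ target setup, mirroring perf_adq.sh@42-49.
  ./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
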
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.106 22:54:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.018 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:16.018 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:26:16.018 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:16.018 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:17.932 22:54:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:19.851 22:54:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.142 22:54:51 
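
The iptr step in this teardown is just a save/filter/restore round-trip over the firewall ruleset; a hedged one-line reconstruction of what the common.sh@787 trace above shows, relying only on the SPDK_NVMF comment tag that the setup phase attaches to its ACCEPT rules:

  # Drop every rule the harness added (all carry an 'SPDK_NVMF:' comment).
  iptables-save | grep -v SPDK_NVMF | iptables-restore
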
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # 
(( 2 == 0 )) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:25.142 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:25.142 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:25.142 Found net devices under 0000:31:00.0: cvl_0_0 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.142 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:25.142 Found net devices under 0000:31:00.1: cvl_0_1 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up
00:26:25.143 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:25.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:25.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms
00:26:25.143
00:26:25.143 --- 10.0.0.2 ping statistics ---
00:26:25.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:25.143 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:25.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:25.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:26:25.143
00:26:25.143 --- 10.0.0.1 ping statistics ---
00:26:25.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:25.143 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:26:25.143 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:26:25.404 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:26:25.404 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:26:25.404 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:26:25.404 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:26:25.404 net.core.busy_poll = 1
00:26:25.404 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:26:25.404 net.core.busy_read = 1
00:26:25.404 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:26:25.404 22:54:52
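
Pulled together from the adq_configure_driver trace above and the tc steps on the following lines, the driver-side ADQ recipe amounts to the sketch below. It would run inside the cvl_0_0_ns_spdk namespace, and the device name and the 2+2 queue split are just this run's values, not general defaults:

  ethtool --offload cvl_0_0 hw-tc-offload on                        # enable hardware traffic classes
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                                    # busy-poll on blocking socket waits
  sysctl -w net.core.busy_read=1                                    # busy-poll on socket reads
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP flows to TC 1
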
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:25.404 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:25.404 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:25.665 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:25.665 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:25.665 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:25.665 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.665 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.665 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=769741 00:26:25.666 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 769741 00:26:25.666 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:25.666 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 769741 ']' 00:26:25.666 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.666 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.666 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.666 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.666 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.666 [2024-09-30 22:54:52.537265] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:26:25.666 [2024-09-30 22:54:52.537330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.666 [2024-09-30 22:54:52.629286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.927 [2024-09-30 22:54:52.726541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.927 [2024-09-30 22:54:52.726600] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:25.927 [2024-09-30 22:54:52.726609] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.927 [2024-09-30 22:54:52.726616] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.927 [2024-09-30 22:54:52.726623] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.927 [2024-09-30 22:54:52.727271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.927 [2024-09-30 22:54:52.727397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.927 [2024-09-30 22:54:52.727556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.927 [2024-09-30 22:54:52.727557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.499 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:26.844 22:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.844 [2024-09-30 22:54:53.577706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.844 Malloc1 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.844 [2024-09-30 22:54:53.643633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=770057 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:26.844 22:54:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:26:28.839 "tick_rate": 2400000000,
00:26:28.839 "poll_groups": [
00:26:28.839 {
00:26:28.839 "name": "nvmf_tgt_poll_group_000",
00:26:28.839 "admin_qpairs": 1,
00:26:28.839 "io_qpairs": 2,
00:26:28.839 "current_admin_qpairs": 1,
00:26:28.839 "current_io_qpairs": 2,
00:26:28.839 "pending_bdev_io": 0,
00:26:28.839 "completed_nvme_io": 25650,
00:26:28.839 "transports": [
00:26:28.839 {
00:26:28.839 "trtype": "TCP"
00:26:28.839 }
00:26:28.839 ]
00:26:28.839 },
00:26:28.839 {
00:26:28.839 "name": "nvmf_tgt_poll_group_001",
00:26:28.839 "admin_qpairs": 0,
00:26:28.839 "io_qpairs": 2,
00:26:28.839 "current_admin_qpairs": 0,
00:26:28.839 "current_io_qpairs": 2,
00:26:28.839 "pending_bdev_io": 0,
00:26:28.839 "completed_nvme_io": 27992,
00:26:28.839 "transports": [
00:26:28.839 {
00:26:28.839 "trtype": "TCP"
00:26:28.839 }
00:26:28.839 ]
00:26:28.839 },
00:26:28.839 {
00:26:28.839 "name": "nvmf_tgt_poll_group_002",
00:26:28.839 "admin_qpairs": 0,
00:26:28.839 "io_qpairs": 0,
00:26:28.839 "current_admin_qpairs": 0,
00:26:28.839 "current_io_qpairs": 0,
00:26:28.839 "pending_bdev_io": 0,
00:26:28.839 "completed_nvme_io": 0,
00:26:28.839 "transports": [
00:26:28.839 {
00:26:28.839 "trtype": "TCP"
00:26:28.839 }
00:26:28.839 ]
00:26:28.839 },
00:26:28.839 {
00:26:28.839 "name": "nvmf_tgt_poll_group_003",
00:26:28.839 "admin_qpairs": 0,
00:26:28.839 "io_qpairs": 0,
00:26:28.839 "current_admin_qpairs": 0,
00:26:28.839 "current_io_qpairs": 0,
00:26:28.839 "pending_bdev_io": 0,
00:26:28.839 "completed_nvme_io": 0,
00:26:28.839 "transports": [
00:26:28.839 {
00:26:28.839 "trtype": "TCP"
00:26:28.839 }
00:26:28.839 ]
00:26:28.839 }
00:26:28.839 ]
00:26:28.839 }'
00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:26:28.839 22:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 770057
00:26:36.982 Initializing NVMe Controllers
00:26:36.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:36.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:26:36.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:26:36.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:26:36.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:26:36.982 Initialization complete. Launching workers.
00:26:36.982 ========================================================
00:26:36.982 Latency(us)
00:26:36.982 Device Information : IOPS MiB/s Average min max
00:26:36.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9799.10 38.28 6531.48 1227.46 53074.76
00:26:36.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10003.80 39.08 6397.42 1201.06 54494.30
00:26:36.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8222.00 32.12 7784.33 890.82 52601.97
00:26:36.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8519.40 33.28 7534.13 959.01 52633.11
00:26:36.982 ========================================================
00:26:36.982 Total : 36544.30 142.75 7010.40 890.82 54494.30
00:26:36.982
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:36.982 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 769741 ']'
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 769741
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 769741 ']'
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 769741
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 769741
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 769741'
killing process with pid 769741
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 769741
00:26:36.982 22:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 769741
00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:26:37.244 22:55:04
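
Note how the pass/fail check flips between the two runs: with placement-id 0 the harness expected one I/O qpair on each of the four poll groups, while with placement-id 1 the two connections per active group must leave at least two groups idle. A sketch of that second check as the perf_adq.sh@107-@109 trace applies it; stats.json is an illustrative file name, the jq filter is verbatim from the trace:

  ./scripts/rpc.py nvmf_get_stats > stats.json
  count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' stats.json | wc -l)
  [[ $count -lt 2 ]] && echo "ADQ steering regressed: expected at least 2 idle poll groups"
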
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.244 22:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:40.543 00:26:40.543 real 0m55.027s 00:26:40.543 user 2m49.721s 00:26:40.543 sys 0m11.966s 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.543 ************************************ 00:26:40.543 END TEST nvmf_perf_adq 00:26:40.543 ************************************ 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:40.543 ************************************ 00:26:40.543 START TEST nvmf_shutdown 00:26:40.543 ************************************ 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:40.543 * Looking for test storage... 
00:26:40.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.543 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:40.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.544 --rc genhtml_branch_coverage=1 00:26:40.544 --rc genhtml_function_coverage=1 00:26:40.544 --rc genhtml_legend=1 00:26:40.544 --rc geninfo_all_blocks=1 00:26:40.544 --rc geninfo_unexecuted_blocks=1 00:26:40.544 00:26:40.544 ' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:40.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.544 --rc genhtml_branch_coverage=1 00:26:40.544 --rc genhtml_function_coverage=1 00:26:40.544 --rc genhtml_legend=1 00:26:40.544 --rc geninfo_all_blocks=1 00:26:40.544 --rc geninfo_unexecuted_blocks=1 00:26:40.544 00:26:40.544 ' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:40.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.544 --rc genhtml_branch_coverage=1 00:26:40.544 --rc genhtml_function_coverage=1 00:26:40.544 --rc genhtml_legend=1 00:26:40.544 --rc geninfo_all_blocks=1 00:26:40.544 --rc geninfo_unexecuted_blocks=1 00:26:40.544 00:26:40.544 ' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:40.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.544 --rc genhtml_branch_coverage=1 00:26:40.544 --rc genhtml_function_coverage=1 00:26:40.544 --rc genhtml_legend=1 00:26:40.544 --rc geninfo_all_blocks=1 00:26:40.544 --rc geninfo_unexecuted_blocks=1 00:26:40.544 00:26:40.544 ' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:40.544 22:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.544 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:40.806 ************************************ 00:26:40.806 START TEST nvmf_shutdown_tc1 00:26:40.806 ************************************ 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:40.806 22:55:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:48.945 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.946 22:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:48.946 22:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:48.946 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:48.946 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:48.946 Found net devices under 0000:31:00.0: cvl_0_0 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.946 22:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:48.946 Found net devices under 0000:31:00.1: cvl_0_1 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.946 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:26:48.947 00:26:48.947 --- 10.0.0.2 ping statistics --- 00:26:48.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.947 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:26:48.947 00:26:48.947 --- 10.0.0.1 ping statistics --- 00:26:48.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.947 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=776637 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 776637 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 776637 ']' 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
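Everything nvmf_tcp_init traced above reduces to a small two-port topology: the target-side port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, its link partner cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction proves reachability before any NVMe work starts. A condensed sketch of those steps, with the namespace, interface, and address names copied from this trace (a readable summary, not a replacement for nvmf/common.sh):

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP default port
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator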
00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.947 22:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:48.947 [2024-09-30 22:55:15.463226] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:26:48.947 [2024-09-30 22:55:15.463294] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.947 [2024-09-30 22:55:15.532442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.947 [2024-09-30 22:55:15.618335] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.947 [2024-09-30 22:55:15.618394] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.947 [2024-09-30 22:55:15.618400] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.947 [2024-09-30 22:55:15.618405] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.947 [2024-09-30 22:55:15.618410] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.947 [2024-09-30 22:55:15.618572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.947 [2024-09-30 22:55:15.618733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.947 [2024-09-30 22:55:15.618902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.947 [2024-09-30 22:55:15.618912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:49.521 [2024-09-30 22:55:16.385953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:49.521 22:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.521 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:49.521 Malloc1 
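The ten for/cat iterations traced above append one block of RPC lines per subsystem to rpcs.txt, and the rpc_cmd that follows replays the whole file through a single rpc.py session; the Malloc1 through Malloc10 names echoed around this point are what those RPCs print back. xtrace does not show the heredoc bodies themselves, so the block below is a plausible reconstruction rather than a verbatim copy of shutdown.sh: the four RPC names are standard SPDK RPCs, the 64 MiB size and 512-byte block size come from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE set earlier, the listener address and port are the ones this log uses, and the SPDK$i serial number is illustrative.

# Hypothetical reconstruction of the per-subsystem heredoc (hidden by xtrace).
for i in {1..10}; do                 # matches num_subsystems=({1..10}) above
  cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt                   # apply every queued RPC in one pass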
00:26:49.521 [2024-09-30 22:55:16.499517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.521 Malloc2 00:26:49.782 Malloc3 00:26:49.782 Malloc4 00:26:49.782 Malloc5 00:26:49.782 Malloc6 00:26:49.782 Malloc7 00:26:50.043 Malloc8 00:26:50.043 Malloc9 00:26:50.043 Malloc10 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=777021 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 777021 /var/tmp/bdevperf.sock 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 777021 ']' 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:50.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
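The helper app traced immediately below (bdev_svc, running in the background as pid 777021) takes its entire bdev configuration from /dev/fd/63, a process substitution fed by gen_nvmf_target_json. The repeated heredoc expansions that follow are that function at work: for each subsystem id it stamps out one bdev_nvme_attach_controller params block, collects the blocks in an array, comma-joins them with IFS=, and hands the result to jq, which splices it into the final JSON document. A self-contained sketch of the same mechanism follows; gen_json is a hypothetical name, the literal address, port, and transport stand in for the NVMF_* variables the real function expands, and the jq wrapper step is omitted:

gen_json() {                     # hypothetical stand-in for gen_nvmf_target_json
  local config=() s
  for s in "${@:-1}"; do         # one block per requested subsystem id
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$s",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$s",
    "hostnqn": "nqn.2016-06.io.spdk:host$s",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,                    # comma-join; the same shape printf emits in the trace
  printf '%s\n' "${config[*]}"
}
# Consumed without touching disk, e.g.:  some_app --json <(gen_json 1 2 3)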
00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.043 { 00:26:50.043 "params": { 00:26:50.043 "name": "Nvme$subsystem", 00:26:50.043 "trtype": "$TEST_TRANSPORT", 00:26:50.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.043 "adrfam": "ipv4", 00:26:50.043 "trsvcid": "$NVMF_PORT", 00:26:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.043 "hdgst": ${hdgst:-false}, 00:26:50.043 "ddgst": ${ddgst:-false} 00:26:50.043 }, 00:26:50.043 "method": "bdev_nvme_attach_controller" 00:26:50.043 } 00:26:50.043 EOF 00:26:50.043 )") 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.043 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.043 { 00:26:50.043 "params": { 00:26:50.043 "name": "Nvme$subsystem", 00:26:50.043 "trtype": "$TEST_TRANSPORT", 00:26:50.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.043 "adrfam": "ipv4", 00:26:50.043 "trsvcid": "$NVMF_PORT", 00:26:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.043 "hdgst": ${hdgst:-false}, 00:26:50.043 "ddgst": ${ddgst:-false} 00:26:50.043 }, 00:26:50.043 "method": "bdev_nvme_attach_controller" 00:26:50.043 } 00:26:50.043 EOF 00:26:50.043 )") 00:26:50.044 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.044 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.044 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.044 { 00:26:50.044 "params": { 00:26:50.044 "name": "Nvme$subsystem", 00:26:50.044 "trtype": "$TEST_TRANSPORT", 00:26:50.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.044 "adrfam": "ipv4", 00:26:50.044 "trsvcid": "$NVMF_PORT", 00:26:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.044 "hdgst": ${hdgst:-false}, 00:26:50.044 "ddgst": ${ddgst:-false} 00:26:50.044 }, 00:26:50.044 "method": "bdev_nvme_attach_controller" 
00:26:50.044 } 00:26:50.044 EOF 00:26:50.044 )") 00:26:50.044 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.044 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.044 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.044 { 00:26:50.044 "params": { 00:26:50.044 "name": "Nvme$subsystem", 00:26:50.044 "trtype": "$TEST_TRANSPORT", 00:26:50.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.044 "adrfam": "ipv4", 00:26:50.044 "trsvcid": "$NVMF_PORT", 00:26:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.044 "hdgst": ${hdgst:-false}, 00:26:50.044 "ddgst": ${ddgst:-false} 00:26:50.044 }, 00:26:50.044 "method": "bdev_nvme_attach_controller" 00:26:50.044 } 00:26:50.044 EOF 00:26:50.044 )") 00:26:50.044 22:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.044 { 00:26:50.044 "params": { 00:26:50.044 "name": "Nvme$subsystem", 00:26:50.044 "trtype": "$TEST_TRANSPORT", 00:26:50.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.044 "adrfam": "ipv4", 00:26:50.044 "trsvcid": "$NVMF_PORT", 00:26:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.044 "hdgst": ${hdgst:-false}, 00:26:50.044 "ddgst": ${ddgst:-false} 00:26:50.044 }, 00:26:50.044 "method": "bdev_nvme_attach_controller" 00:26:50.044 } 00:26:50.044 EOF 00:26:50.044 )") 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.044 { 00:26:50.044 "params": { 00:26:50.044 "name": "Nvme$subsystem", 00:26:50.044 "trtype": "$TEST_TRANSPORT", 00:26:50.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.044 "adrfam": "ipv4", 00:26:50.044 "trsvcid": "$NVMF_PORT", 00:26:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.044 "hdgst": ${hdgst:-false}, 00:26:50.044 "ddgst": ${ddgst:-false} 00:26:50.044 }, 00:26:50.044 "method": "bdev_nvme_attach_controller" 00:26:50.044 } 00:26:50.044 EOF 00:26:50.044 )") 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.044 [2024-09-30 22:55:17.022819] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.044 { 00:26:50.044 "params": { 00:26:50.044 "name": "Nvme$subsystem", 00:26:50.044 "trtype": "$TEST_TRANSPORT", 00:26:50.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.044 "adrfam": "ipv4", 00:26:50.044 "trsvcid": "$NVMF_PORT", 00:26:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.044 "hdgst": ${hdgst:-false}, 00:26:50.044 "ddgst": ${ddgst:-false} 00:26:50.044 }, 00:26:50.044 "method": "bdev_nvme_attach_controller" 00:26:50.044 } 00:26:50.044 EOF 00:26:50.044 )") 00:26:50.044 [2024-09-30 22:55:17.022891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.044 { 00:26:50.044 "params": { 00:26:50.044 "name": "Nvme$subsystem", 00:26:50.044 "trtype": "$TEST_TRANSPORT", 00:26:50.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.044 "adrfam": "ipv4", 00:26:50.044 "trsvcid": "$NVMF_PORT", 00:26:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.044 "hdgst": ${hdgst:-false}, 00:26:50.044 "ddgst": ${ddgst:-false} 00:26:50.044 }, 00:26:50.044 "method": "bdev_nvme_attach_controller" 00:26:50.044 } 00:26:50.044 EOF 00:26:50.044 )") 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.044 { 00:26:50.044 "params": { 00:26:50.044 "name": "Nvme$subsystem", 00:26:50.044 "trtype": "$TEST_TRANSPORT", 00:26:50.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.044 "adrfam": "ipv4", 00:26:50.044 "trsvcid": "$NVMF_PORT", 00:26:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.044 "hdgst": ${hdgst:-false}, 00:26:50.044 "ddgst": ${ddgst:-false} 00:26:50.044 }, 00:26:50.044 "method": "bdev_nvme_attach_controller" 00:26:50.044 } 00:26:50.044 EOF 00:26:50.044 )") 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:50.044 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:50.044 { 00:26:50.044 "params": { 00:26:50.044 "name": "Nvme$subsystem", 00:26:50.044 "trtype": "$TEST_TRANSPORT", 00:26:50.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.044 "adrfam": "ipv4", 00:26:50.044 "trsvcid": "$NVMF_PORT", 00:26:50.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.045 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:50.045 "hdgst": ${hdgst:-false}, 00:26:50.045 "ddgst": ${ddgst:-false} 00:26:50.045 }, 00:26:50.045 "method": "bdev_nvme_attach_controller" 00:26:50.045 } 00:26:50.045 EOF 00:26:50.045 )") 00:26:50.045 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:50.045 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:26:50.305 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:26:50.305 22:55:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:26:50.305 "params": { 00:26:50.305 "name": "Nvme1", 00:26:50.305 "trtype": "tcp", 00:26:50.305 "traddr": "10.0.0.2", 00:26:50.305 "adrfam": "ipv4", 00:26:50.305 "trsvcid": "4420", 00:26:50.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme2", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme3", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme4", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme5", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme6", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme7", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:50.306 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme8", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme9", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 },{ 00:26:50.306 "params": { 00:26:50.306 "name": "Nvme10", 00:26:50.306 "trtype": "tcp", 00:26:50.306 "traddr": "10.0.0.2", 00:26:50.306 "adrfam": "ipv4", 00:26:50.306 "trsvcid": "4420", 00:26:50.306 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:50.306 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:50.306 "hdgst": false, 00:26:50.306 "ddgst": false 00:26:50.306 }, 00:26:50.306 "method": "bdev_nvme_attach_controller" 00:26:50.306 }' 00:26:50.306 [2024-09-30 22:55:17.108912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.306 [2024-09-30 22:55:17.206144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.692 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.693 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:51.693 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:51.693 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.693 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:51.693 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.693 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 777021 00:26:51.693 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:51.693 22:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:52.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 777021 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:52.633 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 776637 00:26:52.633 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:52.633 22:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:52.633 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:26:52.633 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:26:52.633 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.633 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.633 { 00:26:52.633 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": "$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.634 "hdgst": ${hdgst:-false}, 00:26:52.634 "ddgst": ${ddgst:-false} 00:26:52.634 }, 00:26:52.634 "method": "bdev_nvme_attach_controller" 00:26:52.634 } 00:26:52.634 EOF 00:26:52.634 )") 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.634 { 00:26:52.634 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": "$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.634 "hdgst": ${hdgst:-false}, 00:26:52.634 "ddgst": ${ddgst:-false} 00:26:52.634 }, 00:26:52.634 "method": "bdev_nvme_attach_controller" 00:26:52.634 } 00:26:52.634 EOF 00:26:52.634 )") 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.634 { 00:26:52.634 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": "$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.634 "hdgst": ${hdgst:-false}, 00:26:52.634 "ddgst": ${ddgst:-false} 00:26:52.634 }, 00:26:52.634 "method": "bdev_nvme_attach_controller" 00:26:52.634 } 00:26:52.634 EOF 00:26:52.634 )") 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.634 { 00:26:52.634 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": 
"$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.634 "hdgst": ${hdgst:-false}, 00:26:52.634 "ddgst": ${ddgst:-false} 00:26:52.634 }, 00:26:52.634 "method": "bdev_nvme_attach_controller" 00:26:52.634 } 00:26:52.634 EOF 00:26:52.634 )") 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.634 { 00:26:52.634 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": "$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.634 "hdgst": ${hdgst:-false}, 00:26:52.634 "ddgst": ${ddgst:-false} 00:26:52.634 }, 00:26:52.634 "method": "bdev_nvme_attach_controller" 00:26:52.634 } 00:26:52.634 EOF 00:26:52.634 )") 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.634 { 00:26:52.634 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": "$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.634 "hdgst": ${hdgst:-false}, 00:26:52.634 "ddgst": ${ddgst:-false} 00:26:52.634 }, 00:26:52.634 "method": "bdev_nvme_attach_controller" 00:26:52.634 } 00:26:52.634 EOF 00:26:52.634 )") 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.634 { 00:26:52.634 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": "$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.634 "hdgst": ${hdgst:-false}, 00:26:52.634 "ddgst": ${ddgst:-false} 00:26:52.634 }, 00:26:52.634 "method": "bdev_nvme_attach_controller" 00:26:52.634 } 00:26:52.634 EOF 00:26:52.634 )") 00:26:52.634 [2024-09-30 22:55:19.556005] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:26:52.634 [2024-09-30 22:55:19.556059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777395 ] 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.634 { 00:26:52.634 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": "$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.634 "hdgst": ${hdgst:-false}, 00:26:52.634 "ddgst": ${ddgst:-false} 00:26:52.634 }, 00:26:52.634 "method": "bdev_nvme_attach_controller" 00:26:52.634 } 00:26:52.634 EOF 00:26:52.634 )") 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.634 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.634 { 00:26:52.634 "params": { 00:26:52.634 "name": "Nvme$subsystem", 00:26:52.634 "trtype": "$TEST_TRANSPORT", 00:26:52.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.634 "adrfam": "ipv4", 00:26:52.634 "trsvcid": "$NVMF_PORT", 00:26:52.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.635 "hdgst": ${hdgst:-false}, 00:26:52.635 "ddgst": ${ddgst:-false} 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 } 00:26:52.635 EOF 00:26:52.635 )") 00:26:52.635 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.635 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:52.635 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:52.635 { 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme$subsystem", 00:26:52.635 "trtype": "$TEST_TRANSPORT", 00:26:52.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "$NVMF_PORT", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.635 "hdgst": ${hdgst:-false}, 00:26:52.635 "ddgst": ${ddgst:-false} 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 } 00:26:52.635 EOF 00:26:52.635 )") 00:26:52.635 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:26:52.635 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
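Also worth spelling out what tc1 is actually asserting. The first consumer (bdev_svc, pid 777021) was killed ungracefully in the traces above (kill -9 777021, hence the "Killed" notice bash printed for shutdown.sh line 74), then kill -0 776637 confirmed the target process survived the abrupt disconnects; only after that does the test regenerate the same JSON, as traced here, and run a real bdevperf pass against all ten controllers. The flags on that second run are standard bdevperf options:

# Second pass, as invoked above; gen_json is the sketch shown earlier.
#   -q 64      up to 64 outstanding I/Os per bdev
#   -o 65536   64 KiB I/O size
#   -w verify  write, read back, and compare (integrity, not just throughput)
#   -t 1       run for one second
bdevperf --json <(gen_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1

The per-controller IOPS and latency table printed after "Running I/O for 1 seconds..." is the output of this run.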
00:26:52.635 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:26:52.635 22:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme1", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme2", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme3", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme4", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme5", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme6", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme7", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme8", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme9", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 },{ 00:26:52.635 "params": { 00:26:52.635 "name": "Nvme10", 00:26:52.635 "trtype": "tcp", 00:26:52.635 "traddr": "10.0.0.2", 00:26:52.635 "adrfam": "ipv4", 00:26:52.635 "trsvcid": "4420", 00:26:52.635 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:52.635 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:52.635 "hdgst": false, 00:26:52.635 "ddgst": false 00:26:52.635 }, 00:26:52.635 "method": "bdev_nvme_attach_controller" 00:26:52.635 }' 00:26:52.635 [2024-09-30 22:55:19.637858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.897 [2024-09-30 22:55:19.702760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.839 Running I/O for 1 seconds... 00:26:55.223 1872.00 IOPS, 117.00 MiB/s 00:26:55.223 Latency(us) 00:26:55.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.223 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme1n1 : 1.10 237.88 14.87 0.00 0.00 259156.53 13489.49 237677.23 00:26:55.223 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme2n1 : 1.07 239.60 14.97 0.00 0.00 258899.63 19660.80 241172.48 00:26:55.223 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme3n1 : 1.16 225.42 14.09 0.00 0.00 259506.30 8410.45 246415.36 00:26:55.223 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme4n1 : 1.17 272.68 17.04 0.00 0.00 219450.88 19005.44 244667.73 00:26:55.223 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme5n1 : 1.11 230.32 14.40 0.00 0.00 256072.32 19442.35 222822.40 00:26:55.223 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme6n1 : 1.14 225.22 14.08 0.00 0.00 253736.53 23046.83 246415.36 00:26:55.223 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme7n1 : 1.18 271.85 16.99 0.00 0.00 210327.98 10594.99 246415.36 00:26:55.223 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme8n1 : 1.18 268.73 16.80 0.00 0.00 208861.57 13489.49 244667.73 00:26:55.223 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme9n1 : 1.17 218.35 13.65 0.00 0.00 252341.76 16930.13 265639.25 00:26:55.223 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:55.223 
Verification LBA range: start 0x0 length 0x400 00:26:55.223 Nvme10n1 : 1.19 269.36 16.84 0.00 0.00 201353.90 10540.37 251658.24 00:26:55.223 =================================================================================================================== 00:26:55.223 Total : 2459.41 153.71 0.00 0.00 235548.12 8410.45 265639.25 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.223 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.223 rmmod nvme_tcp 00:26:55.484 rmmod nvme_fabrics 00:26:55.484 rmmod nvme_keyring 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 776637 ']' 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 776637 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 776637 ']' 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 776637 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 776637 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:55.484 
22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 776637' 00:26:55.484 killing process with pid 776637 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 776637 00:26:55.484 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 776637 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.745 22:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.293 00:26:58.293 real 0m17.107s 00:26:58.293 user 0m33.921s 00:26:58.293 sys 0m7.106s 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.293 ************************************ 00:26:58.293 END TEST nvmf_shutdown_tc1 00:26:58.293 ************************************ 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:58.293 ************************************ 00:26:58.293 START TEST nvmf_shutdown_tc2 00:26:58.293 ************************************ 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 
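The tc1 teardown just completed follows the generic stoptarget/nvmftestfini path: the generated bdevperf.conf and rpcs.txt are removed, modprobe -v -r unloads nvme_tcp, nvme_fabrics and nvme_keyring, and killprocess stops the target before remove_spdk_ns and the ip -4 addr flush cvl_0_1 clean up the namespace. A minimal sketch of the killprocess pattern as it can be reconstructed from the xtrace lines above; the real helper in common/autotest_common.sh has more branches (for example for processes started under sudo, which this run does not exercise), so the sudo path below is an assumption:

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                 # the '[' -z 776637 ']' guard above
  kill -0 "$pid" || return 0                # nothing to do if it already exited
  local process_name=
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_1' in this run
  fi
  if [ "$process_name" = sudo ]; then
    kill -9 "$pid"                          # assumption: branch not taken here
  else
    echo "killing process with pid $pid"
    kill "$pid"
  fi
  wait "$pid"                               # reap and propagate the exit code
}

In the trace the '[' reactor_1 = sudo ']' test is false, so the plain kill/wait path runs and the target exits cleanly before tc2 begins.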
00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@322 -- # local -ga mlx 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:58.293 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:58.294 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:58.294 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:58.294 Found net devices under 0000:31:00.0: cvl_0_0 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:58.294 Found net devices under 0000:31:00.1: cvl_0_1 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:26:58.294 22:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.294 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.294 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.294 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.294 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:26:58.294 00:26:58.294 --- 10.0.0.2 ping statistics --- 00:26:58.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.294 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:26:58.295 00:26:58.295 --- 10.0.0.1 ping statistics --- 00:26:58.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.295 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=778665 00:26:58.295 22:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 778665 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 778665 ']' 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.295 22:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:58.295 [2024-09-30 22:55:25.245069] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:26:58.295 [2024-09-30 22:55:25.245136] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.555 [2024-09-30 22:55:25.334486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.555 [2024-09-30 22:55:25.396131] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.555 [2024-09-30 22:55:25.396168] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.555 [2024-09-30 22:55:25.396177] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.555 [2024-09-30 22:55:25.396182] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.556 [2024-09-30 22:55:25.396186] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
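The nvmf_tgt that just reported "Total cores available: 4" was brought up by the nvmftestinit sequence traced above: one port of the two-port e810 card (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that sequence, using only commands the log itself shows (the error handling and the doubled ip netns exec wrapper in nvmf/common.sh are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port toward the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# launch the target inside the namespace; -m 0x1E pins reactors to cores 1-4
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The mask 0x1E is binary 11110, which is why four "Reactor started" notices for cores 1 through 4 follow just below, and why waitforlisten blocks on /var/tmp/spdk.sock before the transport is created.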
00:26:58.556 [2024-09-30 22:55:25.396326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.556 [2024-09-30 22:55:25.396480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.556 [2024-09-30 22:55:25.396635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.556 [2024-09-30 22:55:25.396636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.175 [2024-09-30 22:55:26.100321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:59.175 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.176 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.176 Malloc1 00:26:59.436 [2024-09-30 22:55:26.198996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.436 Malloc2 00:26:59.436 Malloc3 00:26:59.436 Malloc4 00:26:59.436 Malloc5 00:26:59.436 Malloc6 00:26:59.436 Malloc7 00:26:59.436 Malloc8 00:26:59.698 Malloc9 00:26:59.698 Malloc10 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=778903 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 778903 /var/tmp/bdevperf.sock 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 778903 ']' 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:59.698 22:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:59.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.698 { 00:26:59.698 "params": { 00:26:59.698 "name": "Nvme$subsystem", 00:26:59.698 "trtype": "$TEST_TRANSPORT", 00:26:59.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.698 "adrfam": "ipv4", 00:26:59.698 "trsvcid": "$NVMF_PORT", 00:26:59.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.698 "hdgst": ${hdgst:-false}, 00:26:59.698 "ddgst": ${ddgst:-false} 00:26:59.698 }, 00:26:59.698 "method": "bdev_nvme_attach_controller" 00:26:59.698 } 00:26:59.698 EOF 00:26:59.698 )") 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.698 { 00:26:59.698 "params": { 00:26:59.698 "name": "Nvme$subsystem", 00:26:59.698 "trtype": "$TEST_TRANSPORT", 00:26:59.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.698 "adrfam": "ipv4", 00:26:59.698 "trsvcid": "$NVMF_PORT", 00:26:59.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.698 "hdgst": ${hdgst:-false}, 00:26:59.698 "ddgst": ${ddgst:-false} 00:26:59.698 }, 00:26:59.698 "method": "bdev_nvme_attach_controller" 00:26:59.698 } 00:26:59.698 EOF 00:26:59.698 )") 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.698 { 00:26:59.698 "params": { 00:26:59.698 
"name": "Nvme$subsystem", 00:26:59.698 "trtype": "$TEST_TRANSPORT", 00:26:59.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.698 "adrfam": "ipv4", 00:26:59.698 "trsvcid": "$NVMF_PORT", 00:26:59.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.698 "hdgst": ${hdgst:-false}, 00:26:59.698 "ddgst": ${ddgst:-false} 00:26:59.698 }, 00:26:59.698 "method": "bdev_nvme_attach_controller" 00:26:59.698 } 00:26:59.698 EOF 00:26:59.698 )") 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.698 { 00:26:59.698 "params": { 00:26:59.698 "name": "Nvme$subsystem", 00:26:59.698 "trtype": "$TEST_TRANSPORT", 00:26:59.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.698 "adrfam": "ipv4", 00:26:59.698 "trsvcid": "$NVMF_PORT", 00:26:59.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.698 "hdgst": ${hdgst:-false}, 00:26:59.698 "ddgst": ${ddgst:-false} 00:26:59.698 }, 00:26:59.698 "method": "bdev_nvme_attach_controller" 00:26:59.698 } 00:26:59.698 EOF 00:26:59.698 )") 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.698 { 00:26:59.698 "params": { 00:26:59.698 "name": "Nvme$subsystem", 00:26:59.698 "trtype": "$TEST_TRANSPORT", 00:26:59.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.698 "adrfam": "ipv4", 00:26:59.698 "trsvcid": "$NVMF_PORT", 00:26:59.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.698 "hdgst": ${hdgst:-false}, 00:26:59.698 "ddgst": ${ddgst:-false} 00:26:59.698 }, 00:26:59.698 "method": "bdev_nvme_attach_controller" 00:26:59.698 } 00:26:59.698 EOF 00:26:59.698 )") 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.698 { 00:26:59.698 "params": { 00:26:59.698 "name": "Nvme$subsystem", 00:26:59.698 "trtype": "$TEST_TRANSPORT", 00:26:59.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.698 "adrfam": "ipv4", 00:26:59.698 "trsvcid": "$NVMF_PORT", 00:26:59.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.698 "hdgst": ${hdgst:-false}, 00:26:59.698 "ddgst": ${ddgst:-false} 00:26:59.698 }, 00:26:59.698 "method": "bdev_nvme_attach_controller" 00:26:59.698 } 00:26:59.698 EOF 00:26:59.698 )") 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:26:59.698 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.698 { 00:26:59.698 "params": { 00:26:59.698 "name": "Nvme$subsystem", 00:26:59.698 "trtype": "$TEST_TRANSPORT", 00:26:59.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.698 "adrfam": "ipv4", 00:26:59.698 "trsvcid": "$NVMF_PORT", 00:26:59.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.698 "hdgst": ${hdgst:-false}, 00:26:59.698 "ddgst": ${ddgst:-false} 00:26:59.698 }, 00:26:59.698 "method": "bdev_nvme_attach_controller" 00:26:59.698 } 00:26:59.699 EOF 00:26:59.699 )") 00:26:59.699 [2024-09-30 22:55:26.644303] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:26:59.699 [2024-09-30 22:55:26.644358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778903 ] 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.699 { 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme$subsystem", 00:26:59.699 "trtype": "$TEST_TRANSPORT", 00:26:59.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "$NVMF_PORT", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.699 "hdgst": ${hdgst:-false}, 00:26:59.699 "ddgst": ${ddgst:-false} 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 } 00:26:59.699 EOF 00:26:59.699 )") 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.699 { 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme$subsystem", 00:26:59.699 "trtype": "$TEST_TRANSPORT", 00:26:59.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "$NVMF_PORT", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.699 "hdgst": ${hdgst:-false}, 00:26:59.699 "ddgst": ${ddgst:-false} 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 } 00:26:59.699 EOF 00:26:59.699 )") 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:59.699 { 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme$subsystem", 00:26:59.699 "trtype": "$TEST_TRANSPORT", 00:26:59.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.699 "adrfam": 
"ipv4", 00:26:59.699 "trsvcid": "$NVMF_PORT", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.699 "hdgst": ${hdgst:-false}, 00:26:59.699 "ddgst": ${ddgst:-false} 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 } 00:26:59.699 EOF 00:26:59.699 )") 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:26:59.699 22:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme1", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme2", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme3", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme4", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme5", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme6", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme7", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 
"adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme8", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme9", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 },{ 00:26:59.699 "params": { 00:26:59.699 "name": "Nvme10", 00:26:59.699 "trtype": "tcp", 00:26:59.699 "traddr": "10.0.0.2", 00:26:59.699 "adrfam": "ipv4", 00:26:59.699 "trsvcid": "4420", 00:26:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:59.699 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:59.699 "hdgst": false, 00:26:59.699 "ddgst": false 00:26:59.699 }, 00:26:59.699 "method": "bdev_nvme_attach_controller" 00:26:59.699 }' 00:26:59.960 [2024-09-30 22:55:26.722513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.960 [2024-09-30 22:55:26.787732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.345 Running I/O for 10 seconds... 
00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:01.345 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:01.605 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:01.605 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:01.606 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:01.606 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:01.606 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.606 22:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.866 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.866 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=72 00:27:01.866 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:27:01.866 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 778903 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 778903 ']' 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 778903 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.130 22:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 778903 00:27:02.130 22:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:02.130 22:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:02.130 22:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 778903' 00:27:02.130 killing process with pid 778903 00:27:02.130 22:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 778903
00:27:02.130 22:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 778903
00:27:02.130 Received shutdown signal, test time was about 0.977823 seconds
00:27:02.130
00:27:02.130 Latency(us)
00:27:02.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:02.130 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme1n1 : 0.97 269.39 16.84 0.00 0.00 234315.29 3112.96 232434.35
00:27:02.130 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme2n1 : 0.98 262.04 16.38 0.00 0.00 236077.44 8137.39 249910.61
00:27:02.130 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme3n1 : 0.95 272.42 17.03 0.00 0.00 221619.96 6280.53 249910.61
00:27:02.130 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme4n1 : 0.95 208.42 13.03 0.00 0.00 283674.13 2785.28 251658.24
00:27:02.130 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme5n1 : 0.95 201.51 12.59 0.00 0.00 288559.50 21517.65 255153.49
00:27:02.130 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme6n1 : 0.97 268.85 16.80 0.00 0.00 211481.89 3440.64 244667.73
00:27:02.130 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme7n1 : 0.96 266.00 16.62 0.00 0.00 209415.04 19988.48 255153.49
00:27:02.130 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme8n1 : 0.97 263.50 16.47 0.00 0.00 206918.19 16165.55 248162.99
00:27:02.130 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme9n1 : 0.94 208.73 13.05 0.00 0.00 252263.31 1952.43 244667.73
00:27:02.130 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:02.130 Verification LBA range: start 0x0 length 0x400
00:27:02.130 Nvme10n1 : 0.96 200.31 12.52 0.00 0.00 259066.03 18677.76 269134.51
00:27:02.130 ===================================================================================================================
00:27:02.130 Total : 2421.17 151.32 0.00 0.00 237015.51 1952.43 269134.51
00:27:02.391 22:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 778665
00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:03.333 rmmod nvme_tcp 00:27:03.333 rmmod nvme_fabrics 00:27:03.333 rmmod nvme_keyring 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 778665 ']' 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 778665 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 778665 ']' 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 778665 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:03.333 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 778665 00:27:03.593 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:03.593 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:03.593 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 778665' 00:27:03.593 killing process with pid 778665 00:27:03.593 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 778665 00:27:03.593 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 778665 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:03.854 
22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.854 22:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.767 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:05.767 00:27:05.767 real 0m7.927s 00:27:05.767 user 0m23.697s 00:27:05.767 sys 0m1.300s 00:27:05.767 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.767 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.767 ************************************ 00:27:05.767 END TEST nvmf_shutdown_tc2 00:27:05.767 ************************************ 00:27:05.767 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:05.767 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:05.767 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.767 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.029 ************************************ 00:27:06.029 START TEST nvmf_shutdown_tc3 00:27:06.029 ************************************ 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:06.029 22:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.029 22:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:06.029 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:06.029 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.029 
22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:06.029 Found net devices under 0000:31:00.0: cvl_0_0 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.029 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:06.030 Found net devices under 0000:31:00.1: cvl_0_1 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:27:06.030 22:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.030 22:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:06.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:06.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms
00:27:06.291
00:27:06.291 --- 10.0.0.2 ping statistics ---
00:27:06.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:06.291 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:06.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:06.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms
00:27:06.291
00:27:06.291 --- 10.0.0.1 ping statistics ---
00:27:06.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:06.291 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=780352
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 780352
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 780352 ']'
00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local
rpc_addr=/var/tmp/spdk.sock 00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.291 22:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:06.291 [2024-09-30 22:55:33.241414] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:27:06.291 [2024-09-30 22:55:33.241467] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.551 [2024-09-30 22:55:33.326671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.551 [2024-09-30 22:55:33.393109] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.551 [2024-09-30 22:55:33.393147] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.552 [2024-09-30 22:55:33.393152] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.552 [2024-09-30 22:55:33.393161] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.552 [2024-09-30 22:55:33.393165] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
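
The nvmf_tcp_init xtrace above (nvmf/common.sh@250-291) builds the self-contained NVMe/TCP topology that both shutdown tests run on: the cvl_0_0 port is moved into a private network namespace to host the target, cvl_0_1 stays in the root namespace as the initiator, an iptables rule admits TCP port 4420, and one ping in each direction proves the link before nvmf_tgt is started inside the namespace. Condensed from the commands visible in the trace, the setup amounts to the following sketch (a reconstruction, not the verbatim helper; the device names, addresses, and variable names are the ones this run used):

# Sketch reconstructed from the nvmf_tcp_init xtrace above; not the verbatim nvmf/common.sh.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_INTERFACE=cvl_0_0      # target port, moves into the namespace
NVMF_INITIATOR_INTERFACE=cvl_0_1   # initiator port, stays in the root namespace
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1

# Start from a clean slate, then isolate the target port in its own namespace.
ip -4 addr flush "$NVMF_TARGET_INTERFACE"
ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"

# Address both ends of the link and bring them up, plus loopback inside the namespace.
ip addr add "$NVMF_INITIATOR_IP/24" dev "$NVMF_INITIATOR_INTERFACE"
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$NVMF_TARGET_INTERFACE"
ip link set "$NVMF_INITIATOR_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Admit NVMe/TCP traffic; the comment tag lets teardown strip exactly this rule later.
iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# One ping per direction verifies connectivity before the target is launched.
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"

The teardown at the end of nvmf_shutdown_tc2 above is the mirror image: iptr pipes iptables-save through grep -v SPDK_NVMF into iptables-restore to drop the tagged rule, remove_spdk_ns deletes the namespace, and the leftover initiator address is flushed with ip -4 addr flush cvl_0_1.
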
00:27:06.552 [2024-09-30 22:55:33.393310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.552 [2024-09-30 22:55:33.393465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.552 [2024-09-30 22:55:33.393618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.552 [2024-09-30 22:55:33.393620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.125 [2024-09-30 22:55:34.068204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.125 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.126 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.386 Malloc1 00:27:07.386 [2024-09-30 22:55:34.167004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.386 Malloc2 00:27:07.386 Malloc3 00:27:07.386 Malloc4 00:27:07.386 Malloc5 00:27:07.386 Malloc6 00:27:07.386 Malloc7 00:27:07.647 Malloc8 00:27:07.647 Malloc9 00:27:07.647 Malloc10 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=780738 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 780738 /var/tmp/bdevperf.sock 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 780738 ']' 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.647 22:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.647 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 
"name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:27:07.648 [2024-09-30 22:55:34.618462] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:27:07.648 [2024-09-30 22:55:34.618519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780738 ] 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": "ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.648 } 00:27:07.648 EOF 00:27:07.648 )") 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:07.648 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:07.648 { 00:27:07.648 "params": { 00:27:07.648 "name": "Nvme$subsystem", 00:27:07.648 "trtype": "$TEST_TRANSPORT", 00:27:07.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.648 "adrfam": 
"ipv4", 00:27:07.648 "trsvcid": "$NVMF_PORT", 00:27:07.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.648 "hdgst": ${hdgst:-false}, 00:27:07.648 "ddgst": ${ddgst:-false} 00:27:07.648 }, 00:27:07.648 "method": "bdev_nvme_attach_controller" 00:27:07.649 } 00:27:07.649 EOF 00:27:07.649 )") 00:27:07.649 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:27:07.649 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:27:07.649 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:27:07.649 22:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme1", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme2", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme3", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme4", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme5", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme6", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme7", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 
"adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme8", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme9", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 },{ 00:27:07.649 "params": { 00:27:07.649 "name": "Nvme10", 00:27:07.649 "trtype": "tcp", 00:27:07.649 "traddr": "10.0.0.2", 00:27:07.649 "adrfam": "ipv4", 00:27:07.649 "trsvcid": "4420", 00:27:07.649 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:07.649 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:07.649 "hdgst": false, 00:27:07.649 "ddgst": false 00:27:07.649 }, 00:27:07.649 "method": "bdev_nvme_attach_controller" 00:27:07.649 }' 00:27:07.909 [2024-09-30 22:55:34.699801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.909 [2024-09-30 22:55:34.764913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.292 Running I/O for 10 seconds... 
00:27:09.292 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:09.292 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0
00:27:09.292 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:27:09.292 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:09.292 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:27:09.553 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:27:09.813 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
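The waitforio trace above reduces to a bounded poll: up to 10 iterations, 0.25 s apart, reading num_read_ops for Nvme1n1 via bdev_get_iostat until at least 100 reads have completed (3, then 67, then 131 here). A minimal standalone sketch of the same loop, with scripts/rpc.py assumed in place of the harness's rpc_cmd wrapper:

  # Succeed once the bdev has served >= 100 reads; give up after 10 polls
  waitforio() {
      local sock=$1 bdev=$2 i count
      for ((i = 10; i != 0; i--)); do
          count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                  jq -r '.bdevs[0].num_read_ops')
          [ "$count" -ge 100 ] && return 0
          sleep 0.25
      done
      return 1
  }
  waitforio /var/tmp/bdevperf.sock Nvme1n1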
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 780352
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 780352 ']'
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 780352
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:10.092 22:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 780352
00:27:10.092 22:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:10.092 22:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:10.092 22:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 780352'
killing process with pid 780352
22:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 780352
22:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 780352
00:27:10.092 [2024-09-30 22:55:37.034312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2216fb0 is same with the state(6) to be set
00:27:10.092 [2024-09-30 22:55:37.035105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218b60 is same with the state(6) to be set
[identical recv-state error repeated for tqpair=0x2218b60 through 22:55:37.035428]
00:27:10.093 [2024-09-30 22:55:37.036437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2217480 is same with the state(6) to be set
[repeated for tqpair=0x2217480 through 22:55:37.036494]
00:27:10.093 [2024-09-30 22:55:37.037408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2217950 is same with the state(6) to be set
[repeated for tqpair=0x2217950 through 22:55:37.037914]
00:27:10.094 [2024-09-30 22:55:37.038753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2217e40 is same with the state(6) to be set
00:27:10.094 [2024-09-30 22:55:37.039396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218310 is same with the state(6) to be set
[repeated for tqpair=0x2218310 through 22:55:37.039708]
00:27:10.094 [2024-09-30 22:55:37.040488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8650 is same with the state(6) to be set
[repeated for tqpair=0x1fa8650 through 22:55:37.040777]
00:27:10.095 [2024-09-30 22:55:37.041566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa89d0 is same with the state(6) to be set
[repeated for tqpair=0x1fa89d0 through 22:55:37.041892]
00:27:10.096 [2024-09-30 22:55:37.042501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set
[repeated for tqpair=0x1fa8ea0 through 22:55:37.042724]
with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.042816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8ea0 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043283] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the 
state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.043550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set 00:27:10.097 [2024-09-30 22:55:37.044619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.044983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.044995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.098 [2024-09-30 22:55:37.045347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.098 [2024-09-30 22:55:37.045355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.099 [2024-09-30 22:55:37.045780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.099 [2024-09-30 22:55:37.045806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.099 [2024-09-30 22:55:37.045851] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc252b0 was disconnected and freed. reset controller. 
00:27:10.099 [2024-09-30 22:55:37.048062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:10.099 [2024-09-30 22:55:37.048084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1 through cid:3 ...]
00:27:10.099 [2024-09-30 22:55:37.048144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53030 is same with the state(6) to be set
[... same four-pair ASYNC EVENT REQUEST abort sequence and recv-state error repeated for tqpair=0x8281d0, 0x824210, 0xc48680, 0xc81ef0, 0xc78b40, 0x73e610, 0x825d40 and 0x81ec10, through 22:55:37.048858 ...]
00:27:10.100 [2024-09-30 22:55:37.049672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.100 [2024-09-30 22:55:37.049688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same WRITE command / ABORTED - SQ DELETION completion pair repeated for cid:59 through cid:63 (lba:32128 through lba:32640) ...]
00:27:10.100 [2024-09-30 22:55:37.049801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.100 [2024-09-30 22:55:37.049809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same READ command / ABORTED - SQ DELETION completion pair repeated for cid:1 through cid:27 (lba:24704 through lba:28032), through 22:55:37.050282 ...]
00:27:10.101 [2024-09-30 22:55:37.052572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218690 is same with the state(6) to be set
[... last message repeated 6 more times, through 22:55:37.052621 ...]
00:27:10.101 [2024-09-30 22:55:37.061694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.101 [2024-09-30 22:55:37.061727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.101 [2024-09-30 22:55:37.061738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.101 [2024-09-30 22:55:37.061751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.101 [2024-09-30 22:55:37.061762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.101 [2024-09-30 22:55:37.061770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.101 [2024-09-30 22:55:37.061780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:27:10.101 [2024-09-30 22:55:37.061788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.101 [2024-09-30 22:55:37.061799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.101 [2024-09-30 22:55:37.061806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.101 [2024-09-30 22:55:37.061817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.101 [2024-09-30 22:55:37.061824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.101 [2024-09-30 22:55:37.061834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.101 [2024-09-30 22:55:37.061842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.101 [2024-09-30 22:55:37.061851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.102 [2024-09-30 22:55:37.061858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.102 [2024-09-30 22:55:37.061868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.102 [2024-09-30 22:55:37.061876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.102 [2024-09-30 22:55:37.061885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.102 [2024-09-30 22:55:37.061908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.102 [2024-09-30 22:55:37.061919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.102 [2024-09-30 22:55:37.061927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.102 [2024-09-30 22:55:37.061937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.102 [2024-09-30 22:55:37.061946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.102 [2024-09-30 22:55:37.061955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.102 [2024-09-30 22:55:37.061963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.102 [2024-09-30 22:55:37.061973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:10.102 [2024-09-30 22:55:37.061981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.061993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.062268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.062348] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd4dde0 was disconnected and freed. reset controller.
00:27:10.102 [2024-09-30 22:55:37.063913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:10.102 [2024-09-30 22:55:37.063950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825d40 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.063986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53030 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.064006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8281d0 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.064022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824210 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.064042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc48680 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.064059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81ef0 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.064092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:10.102 [2024-09-30 22:55:37.064105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:10.102 [2024-09-30 22:55:37.064124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:10.102 [2024-09-30 22:55:37.064143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:10.102 [2024-09-30 22:55:37.064160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc794e0 is same with the state(6) to be set
00:27:10.102 [2024-09-30 22:55:37.064189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78b40 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.064202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73e610 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.064221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81ec10 (9): Bad file descriptor
00:27:10.102 [2024-09-30 22:55:37.064257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.064267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.064288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.064308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.064326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.064344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.102 [2024-09-30 22:55:37.064363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.102 [2024-09-30 22:55:37.064373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.064991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.064998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.065008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.065015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.065024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.103 [2024-09-30 22:55:37.065032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.103 [2024-09-30 22:55:37.065041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.065410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.065465] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd4c9b0 was disconnected and freed. reset controller.
00:27:10.104 [2024-09-30 22:55:37.067013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:10.104 [2024-09-30 22:55:37.068767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:10.104 [2024-09-30 22:55:37.069270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-09-30 22:55:37.069313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825d40 with addr=10.0.0.2, port=4420
00:27:10.104 [2024-09-30 22:55:37.069326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825d40 is same with the state(6) to be set
00:27:10.104 [2024-09-30 22:55:37.069687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-09-30 22:55:37.069701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81ec10 with addr=10.0.0.2, port=4420
00:27:10.104 [2024-09-30 22:55:37.069709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81ec10 is same with the state(6) to be set
00:27:10.104 [2024-09-30 22:55:37.070055] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:10.104 [2024-09-30 22:55:37.070115] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:10.104 [2024-09-30 22:55:37.070154] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:10.104 [2024-09-30 22:55:37.070190] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:10.104 [2024-09-30 22:55:37.070232] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:10.104 [2024-09-30 22:55:37.070313] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:10.104 [2024-09-30 22:55:37.070567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-09-30 22:55:37.070583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8281d0 with addr=10.0.0.2, port=4420
00:27:10.104 [2024-09-30 22:55:37.070591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8281d0 is same with the state(6) to be set
00:27:10.104 [2024-09-30 22:55:37.070604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825d40 (9): Bad file descriptor
00:27:10.104 [2024-09-30 22:55:37.070615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81ec10 (9): Bad file descriptor
00:27:10.104 [2024-09-30 22:55:37.070942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.070957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.070974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.070982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.070992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.071000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.071016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.071024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.071033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.071040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.071050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.071059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.071069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.071076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.071086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.071099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.071108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.071117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.071126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.104 [2024-09-30 22:55:37.071134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.104 [2024-09-30 22:55:37.071145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.105 [2024-09-30 22:55:37.071795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.105 [2024-09-30 22:55:37.071802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.106 [2024-09-30 22:55:37.071812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.106 [2024-09-30 22:55:37.071820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.106 [2024-09-30 22:55:37.071830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.106 [2024-09-30 22:55:37.071837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.106 [2024-09-30 22:55:37.071847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.106 [2024-09-30 22:55:37.071855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.071866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.071874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.071884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.071892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.071908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.071915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.071928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.071936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.071946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.071954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.071965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.071974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.071984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.071992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.072002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.072010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.072020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.072027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.072037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 
22:55:37.072045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.072055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.072063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.072073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.072081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.072091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.072099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.072109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.106 [2024-09-30 22:55:37.072118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.106 [2024-09-30 22:55:37.072126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27cb0 is same with the state(6) to be set 00:27:10.106 [2024-09-30 22:55:37.072185] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc27cb0 was disconnected and freed. reset controller. 00:27:10.106 [2024-09-30 22:55:37.072245] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:10.106 [2024-09-30 22:55:37.072277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8281d0 (9): Bad file descriptor 00:27:10.106 [2024-09-30 22:55:37.072289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:10.106 [2024-09-30 22:55:37.072296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:10.106 [2024-09-30 22:55:37.072305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:10.106 [2024-09-30 22:55:37.072319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:10.106 [2024-09-30 22:55:37.072326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:10.106 [2024-09-30 22:55:37.072333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:10.106 [2024-09-30 22:55:37.073609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.106 [2024-09-30 22:55:37.073624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:10.106 [2024-09-30 22:55:37.073633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:10.106 [2024-09-30 22:55:37.073660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:10.106 [2024-09-30 22:55:37.073668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:10.106 [2024-09-30 22:55:37.073677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:10.106 [2024-09-30 22:55:37.073742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:10.106 [2024-09-30 22:55:37.074227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-09-30 22:55:37.074267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc53030 with addr=10.0.0.2, port=4420
00:27:10.106 [2024-09-30 22:55:37.074280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53030 is same with the state(6) to be set
00:27:10.106 [2024-09-30 22:55:37.074601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53030 (9): Bad file descriptor
00:27:10.106 [2024-09-30 22:55:37.074646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc794e0 (9): Bad file descriptor
00:27:10.106 [2024-09-30 22:55:37.074741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:10.106 [2024-09-30 22:55:37.074753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:10.106 [2024-09-30 22:55:37.074763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:10.106 [2024-09-30 22:55:37.074811 - 22:55:37.076046] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.108 [2024-09-30 22:55:37.076054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc17e50 is same with the state(6) to be set
00:27:10.108 [2024-09-30 22:55:37.077346 - 22:55:37.078540] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.110 [2024-09-30 22:55:37.078549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc26780 is same with the state(6) to be set
00:27:10.110 [2024-09-30 22:55:37.079830 - 22:55:37.080220] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-21 nsid:1 lba:24576-27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; completions through cid:20 each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.110 [2024-09-30 22:55:37.080228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.110 [2024-09-30 22:55:37.080237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.110 [2024-09-30 22:55:37.080247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.110 [2024-09-30 22:55:37.080256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.110 [2024-09-30 22:55:37.080264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.110 [2024-09-30 22:55:37.080274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.110 [2024-09-30 22:55:37.080284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.110 [2024-09-30 22:55:37.080295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.110 [2024-09-30 22:55:37.080304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.110 [2024-09-30 22:55:37.080313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.110 [2024-09-30 22:55:37.080324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.110 [2024-09-30 22:55:37.080335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.110 [2024-09-30 22:55:37.080343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.110 [2024-09-30 22:55:37.080355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.110 [2024-09-30 22:55:37.080363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.110 [2024-09-30 22:55:37.080374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.110 [2024-09-30 22:55:37.080384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.080984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.080994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.081004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.081013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.081022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.081030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc29230 is same with the state(6) to be set 00:27:10.111 [2024-09-30 22:55:37.082301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.082317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.082331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.082341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.111 [2024-09-30 22:55:37.082352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.111 [2024-09-30 22:55:37.082361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.082991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.082999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.083008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.083017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.083026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.083035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.083045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.112 [2024-09-30 22:55:37.083053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.112 [2024-09-30 22:55:37.083063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:10.113 [2024-09-30 22:55:37.083178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 
22:55:37.083360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.083470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.083479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a7b0 is same with the state(6) to be set 00:27:10.113 [2024-09-30 22:55:37.084768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.084984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.084992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.085002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.085010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.085020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.085029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.085038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.113 [2024-09-30 22:55:37.085046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.113 [2024-09-30 22:55:37.085056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.114 [2024-09-30 22:55:37.085743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.114 [2024-09-30 22:55:37.085754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.085955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.085964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2d170 is same with the state(6) to be set 00:27:10.115 [2024-09-30 22:55:37.088062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.115 [2024-09-30 22:55:37.088087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:10.115 [2024-09-30 22:55:37.088099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:10.115 [2024-09-30 22:55:37.088108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:10.115 [2024-09-30 22:55:37.088186] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.115 [2024-09-30 22:55:37.088206] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.115 [2024-09-30 22:55:37.088276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:10.115 [2024-09-30 22:55:37.088288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:10.115 [2024-09-30 22:55:37.088723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-09-30 22:55:37.088739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824210 with addr=10.0.0.2, port=4420 00:27:10.115 [2024-09-30 22:55:37.088747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824210 is same with the state(6) to be set 00:27:10.115 [2024-09-30 22:55:37.088982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-09-30 22:55:37.088995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc48680 with addr=10.0.0.2, port=4420 00:27:10.115 [2024-09-30 22:55:37.089002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc48680 is same with the state(6) to be set 00:27:10.115 [2024-09-30 22:55:37.089399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-09-30 22:55:37.089410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73e610 with addr=10.0.0.2, port=4420 00:27:10.115 [2024-09-30 22:55:37.089417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73e610 is same with the state(6) to be set 00:27:10.115 [2024-09-30 22:55:37.090489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090698] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.115 [2024-09-30 22:55:37.090756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.115 [2024-09-30 22:55:37.090766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.090988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.090997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.091010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.091018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.091028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.116 [2024-09-30 22:55:37.091036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.116 [2024-09-30 22:55:37.091046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.378 [2024-09-30 22:55:37.091213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.378 [2024-09-30 22:55:37.091224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:10.379 [2024-09-30 22:55:37.091548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 
22:55:37.091735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.379 [2024-09-30 22:55:37.091765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.379 [2024-09-30 22:55:37.091773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2bbf0 is same with the state(6) to be set 00:27:10.379 [2024-09-30 22:55:37.093514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:10.379 [2024-09-30 22:55:37.093538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:10.379 [2024-09-30 22:55:37.093548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:10.379 [2024-09-30 22:55:37.093558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:10.379 task offset: 24576 on job bdev=Nvme4n1 fails
00:27:10.379
00:27:10.379 Latency(us)
00:27:10.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.379 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.379 Job: Nvme1n1 ended in about 0.96 seconds with error
00:27:10.379 Verification LBA range: start 0x0 length 0x400
00:27:10.379 Nvme1n1 : 0.96 200.54 12.53 66.85 0.00 236663.68 17585.49 249910.61
00:27:10.379 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.379 Job: Nvme2n1 ended in about 0.96 seconds with error
00:27:10.379 Verification LBA range: start 0x0 length 0x400
00:27:10.379 Nvme2n1 : 0.96 200.86 12.55 66.95 0.00 231385.60 15837.87 246415.36
00:27:10.379 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.379 Job: Nvme3n1 ended in about 0.97 seconds with error
00:27:10.379 Verification LBA range: start 0x0 length 0x400
00:27:10.379 Nvme3n1 : 0.97 198.67 12.42 66.22 0.00 229135.15 19442.35 249910.61
00:27:10.379 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.379 Job: Nvme4n1 ended in about 0.95 seconds with error
00:27:10.379 Verification LBA range: start 0x0 length 0x400
00:27:10.379 Nvme4n1 : 0.95 201.51 12.59 67.17 0.00 220939.52 39103.15 232434.35
00:27:10.379 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.379 Job: Nvme5n1 ended in about 0.97 seconds with error
00:27:10.379 Verification LBA range: start 0x0 length 0x400
00:27:10.379 Nvme5n1 : 0.97 132.11 8.26 66.05 0.00 293679.79 19223.89 255153.49
00:27:10.379 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.379 Job: Nvme6n1 ended in about 0.96 seconds with error
00:27:10.379 Verification LBA range: start 0x0 length 0x400
00:27:10.379 Nvme6n1 : 0.96 139.19 8.70 66.48 0.00 276452.69 18568.53 263891.63
00:27:10.379 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.380 Job: Nvme7n1 ended in about 0.97 seconds with error
00:27:10.380 Verification LBA range: start 0x0 length 0x400
00:27:10.380 Nvme7n1 : 0.97 197.65 12.35 65.88 0.00 211074.35 19879.25 249910.61
00:27:10.380 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.380 Job: Nvme8n1 ended in about 0.97 seconds with error
00:27:10.380 Verification LBA range: start 0x0 length 0x400
00:27:10.380 Nvme8n1 : 0.97 197.16 12.32 65.72 0.00 206824.53 20643.84 248162.99
00:27:10.380 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.380 Job: Nvme9n1 ended in about 0.98 seconds with error
00:27:10.380 Verification LBA range: start 0x0 length 0x400
00:27:10.380 Nvme9n1 : 0.98 130.33 8.15 65.17 0.00 272203.66 20097.71 256901.12
00:27:10.380 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.380 Job: Nvme10n1 ended in about 0.98 seconds with error
00:27:10.380 Verification LBA range: start 0x0 length 0x400
00:27:10.380 Nvme10n1 : 0.98 131.11 8.19 65.55 0.00 263787.52 18240.85 272629.76
00:27:10.380 ===================================================================================================================
00:27:10.380 Total : 1729.13 108.07 662.05 0.00 240717.01 15837.87 272629.76
00:27:10.380 [2024-09-30 22:55:37.117032] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:10.380 [2024-09-30 22:55:37.117064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:10.380 [2024-09-30 22:55:37.117457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.380 [2024-09-30 22:55:37.117474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc78b40 with addr=10.0.0.2, port=4420 00:27:10.380 [2024-09-30 22:55:37.117484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78b40 is same with the state(6) to be set 00:27:10.380 [2024-09-30 22:55:37.117809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.380 [2024-09-30 22:55:37.117820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc81ef0 with addr=10.0.0.2, port=4420 00:27:10.380 [2024-09-30 22:55:37.117827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81ef0 is same with the state(6) to be set 00:27:10.380 [2024-09-30 22:55:37.117840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824210 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.117852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc48680 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.117862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73e610 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.118316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.380 [2024-09-30 22:55:37.118332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81ec10 with addr=10.0.0.2, port=4420 00:27:10.380 [2024-09-30 22:55:37.118340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81ec10 is same with the state(6) to be set 00:27:10.380 [2024-09-30 22:55:37.118688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.380 [2024-09-30 22:55:37.118699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825d40 with addr=10.0.0.2, port=4420 00:27:10.380
[2024-09-30 22:55:37.118707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825d40 is same with the state(6) to be set 00:27:10.380 [2024-09-30 22:55:37.119041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.380 [2024-09-30 22:55:37.119054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8281d0 with addr=10.0.0.2, port=4420 00:27:10.380 [2024-09-30 22:55:37.119062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8281d0 is same with the state(6) to be set 00:27:10.380 [2024-09-30 22:55:37.119252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.380 [2024-09-30 22:55:37.119263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc53030 with addr=10.0.0.2, port=4420 00:27:10.380 [2024-09-30 22:55:37.119271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53030 is same with the state(6) to be set 00:27:10.380 [2024-09-30 22:55:37.119621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.380 [2024-09-30 22:55:37.119632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc794e0 with addr=10.0.0.2, port=4420 00:27:10.380 [2024-09-30 22:55:37.119640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc794e0 is same with the state(6) to be set 00:27:10.380 [2024-09-30 22:55:37.119649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78b40 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.119664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81ef0 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.119673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.119680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.119688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:10.380 [2024-09-30 22:55:37.119700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.119707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.119714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:10.380 [2024-09-30 22:55:37.119725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.119732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.119739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:10.380 [2024-09-30 22:55:37.119769] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.380 [2024-09-30 22:55:37.119782] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:10.380 [2024-09-30 22:55:37.119792] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.380 [2024-09-30 22:55:37.119805] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.380 [2024-09-30 22:55:37.119816] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.380 [2024-09-30 22:55:37.120149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81ec10 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.120188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825d40 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.120198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8281d0 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.120208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53030 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.120218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc794e0 (9): Bad file descriptor 00:27:10.380 [2024-09-30 22:55:37.120227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.120234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.120242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:10.380 [2024-09-30 22:55:37.120253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.120259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.120268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:10.380 [2024-09-30 22:55:37.120534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.120562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.120570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:10.380 [2024-09-30 22:55:37.120579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.120586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.120592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:10.380 [2024-09-30 22:55:37.120602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.120609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.120615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:10.380 [2024-09-30 22:55:37.120625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.120632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.120639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:10.380 [2024-09-30 22:55:37.120648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:10.380 [2024-09-30 22:55:37.120654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:10.380 [2024-09-30 22:55:37.120661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:10.380 [2024-09-30 22:55:37.120694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.380 [2024-09-30 22:55:37.120722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:10.381 22:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 780738 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 780738 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 780738 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:11.322 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:11.582 rmmod nvme_tcp 00:27:11.582 
rmmod nvme_fabrics 00:27:11.582 rmmod nvme_keyring 00:27:11.582 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:11.582 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:27:11.582 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:27:11.582 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 780352 ']' 00:27:11.582 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 780352 00:27:11.582 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 780352 ']' 00:27:11.582 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 780352 00:27:11.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (780352) - No such process 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 780352 is not found' 00:27:11.583 Process with pid 780352 is not found 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.583 22:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.512 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:13.512 00:27:13.512 real 0m7.704s 00:27:13.512 user 0m18.440s 00:27:13.512 sys 0m1.271s 00:27:13.512 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:13.512 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.512 ************************************ 00:27:13.512 END TEST nvmf_shutdown_tc3 00:27:13.512 ************************************ 00:27:13.774 22:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:13.774 ************************************ 00:27:13.774 START TEST nvmf_shutdown_tc4 00:27:13.774 ************************************ 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 
== 0 )) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:13.774 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:13.774 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:13.774 Found net devices under 0000:31:00.0: cvl_0_0 
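Interface discovery in nvmftestinit is plain sysfs: every PCI function whose device ID is on the e810 allow-list (0x1592/0x159b) is mapped to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/*, and only devices with a live link are kept. A standalone sketch of the same lookup for the two ports found in this run (using operstate as the up/down check is an assumption; the script's literal test is the [[ up == up ]] seen above):

    # Map each E810 PCI function to the netdev(s) the kernel bound to it.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            printf '%s -> %s (%s)\n' "$pci" "${dev##*/}" "$(cat "$dev/operstate")"
        done
    done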
00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.774 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:13.775 Found net devices under 0000:31:00.1: cvl_0_1 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.775 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.036 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.036 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.036 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.036 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.036 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:27:14.037 00:27:14.037 --- 10.0.0.2 ping statistics --- 00:27:14.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.037 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:27:14.037 00:27:14.037 --- 10.0.0.1 ping statistics --- 00:27:14.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.037 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=782039 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 782039 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 782039 ']' 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
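The block above builds the whole test dataplane out of one dual-port NIC: the target port cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened for NVMe/TCP, and both directions are ping-verified before the target starts. Condensed from the commands in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1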
00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.037 22:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:14.299 [2024-09-30 22:55:41.055576] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:27:14.299 [2024-09-30 22:55:41.055646] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.299 [2024-09-30 22:55:41.147407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.299 [2024-09-30 22:55:41.220383] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.299 [2024-09-30 22:55:41.220433] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.299 [2024-09-30 22:55:41.220439] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.299 [2024-09-30 22:55:41.220444] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.299 [2024-09-30 22:55:41.220449] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.299 [2024-09-30 22:55:41.220605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.299 [2024-09-30 22:55:41.220765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.299 [2024-09-30 22:55:41.220940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.299 [2024-09-30 22:55:41.220942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:27:14.871 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:14.871 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:27:14.871 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:14.871 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.871 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:15.134 [2024-09-30 22:55:41.903909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:15.134 22:55:41 
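nvmfappstart's -m 0x1E is a core mask, not a core count: 0x1E is binary 11110, so the target gets cores 1 through 4 while core 0 stays free for the rest of the job, which matches the EAL's "Total cores available: 4" and the four reactor lines above. A quick way to expand such a mask:

    # Expand an SPDK/DPDK core mask into core numbers: prints "1 2 3 4" for 0x1E.
    mask=0x1E; for i in $(seq 0 63); do (( (mask >> i) & 1 )) && printf '%d ' "$i"; done; echo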
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.134 22:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:15.134 Malloc1 
00:27:15.134 [2024-09-30 22:55:42.002733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.134 Malloc2 00:27:15.134 Malloc3 00:27:15.134 Malloc4 00:27:15.134 Malloc5 00:27:15.395 Malloc6 00:27:15.395 Malloc7 00:27:15.395 Malloc8 00:27:15.395 Malloc9 00:27:15.395 Malloc10 00:27:15.395 22:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.395 22:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:15.395 22:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:15.395 22:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:15.395 22:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=782264 00:27:15.395 22:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:27:15.395 22:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:27:15.656 [2024-09-30 22:55:42.471091] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 782039 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 782039 ']' 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 782039 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782039 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782039' 00:27:20.952 killing process with pid 782039 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 782039 00:27:20.952 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 782039 00:27:20.952 Write completed with error (sct=0, sc=8) 
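The setup above is the heart of tc4: ten "for i / cat" iterations append one subsystem definition each to rpcs.txt, a single rpc_cmd replays them, and the result is Malloc1 through Malloc10, each a RAM-backed bdev exported through its own subsystem on the 10.0.0.2:4420 TCP listener. spdk_nvme_perf then drives 128-deep random writes while the target is killed under it, which is exactly what the error storm below is exercising. The log never echoes the rpcs.txt contents, so the following is only a plausible single iteration, built from standard SPDK RPCs with assumed sizes and NQNs:

    # Sketch of one create_subsystems iteration (bdev size, block size and NQN assumed).
    rpc.py bdev_malloc_create -b Malloc1 128 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420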
00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 starting I/O failed: -6 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 starting I/O failed: -6 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 starting I/O failed: -6 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.952 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 [2024-09-30 22:55:47.479889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.953 starting I/O failed: -6 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 
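Everything from here to the end of the run is the shutdown race playing out, and the two recurring messages decode consistently: -6 is ENXIO ("No such device or address", as the driver reports once the TCP qpairs vanish), and (sct=0, sc=8) reads as NVMe generic status 0x08, Command Aborted due to SQ Deletion, meaning in-flight writes were aborted because their submission queues were torn down rather than failing on media. For tc4, which kills the target mid-I/O on purpose, this storm is the expected outcome, not a regression.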
00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 [2024-09-30 22:55:47.480797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 
00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 [2024-09-30 22:55:47.481731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 
00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.953 Write completed with error (sct=0, sc=8) 00:27:20.953 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 [2024-09-30 22:55:47.482575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e00 is same with tWrite completed with error (sct=0, sc=8) 
00:27:20.954 he state(6) to be set 00:27:20.954 starting I/O failed: -6 00:27:20.954 [2024-09-30 22:55:47.482607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e00 is same with the state(6) to be set 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 [2024-09-30 22:55:47.482614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e00 is same with the state(6) to be set 00:27:20.954 starting I/O failed: -6 00:27:20.954 [2024-09-30 22:55:47.482619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e00 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e00 is same with the state(6) to be set 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 [2024-09-30 22:55:47.482629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e00 is same with the state(6) to be set 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 [2024-09-30 22:55:47.482882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.482957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc822d0 is same with the state(6) to be set 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 
[2024-09-30 22:55:47.483138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc827a0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.483157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc827a0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.483163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc827a0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.483168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc827a0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.483173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc827a0 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.483190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:20.954 NVMe io qpair process completion error 00:27:20.954 [2024-09-30 22:55:47.483363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81930 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.483383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81930 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.483389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81930 is same with the state(6) to be set 00:27:20.954 [2024-09-30 22:55:47.483394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81930 is same with the state(6) to be set 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 
00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 [2024-09-30 22:55:47.484395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 starting I/O failed: -6 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.954 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 
starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 [2024-09-30 22:55:47.485216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 starting I/O failed: -6 00:27:20.955 Write completed with error (sct=0, sc=8) 00:27:20.955 Write completed 
00:27:20.955 Write completed with error (sct=0, sc=8)
00:27:20.955 starting I/O failed: -6
[... many identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:27:20.956 [2024-09-30 22:55:47.488709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:20.956 NVMe io qpair process completion error
[... repeated write-failure entries elided ...]
00:27:20.956 [2024-09-30 22:55:47.490078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure entries elided ...]
00:27:20.956 [2024-09-30 22:55:47.490917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure entries elided ...]
00:27:20.957 [2024-09-30 22:55:47.491848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure entries elided ...]
00:27:20.957 [2024-09-30 22:55:47.495855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:20.957 NVMe io qpair process completion error
[... repeated write-failure entries elided ...]
00:27:20.958 [2024-09-30 22:55:47.497190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure entries elided ...]
00:27:20.958 [2024-09-30 22:55:47.498132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure entries elided ...]
00:27:20.958 [2024-09-30 22:55:47.499038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure entries elided ...]
00:27:20.959 [2024-09-30 22:55:47.500507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:20.959 NVMe io qpair process completion error
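The "(No such device or address)" lines above come from the SPDK host driver: spdk_nvme_qpair_process_completions() returns a negative errno once the TCP connection behind an I/O qpair has dropped, and -6 is ENXIO. As a rough sketch of that calling pattern only (this is not the test's code; the poll_qpair() helper and its give-up policy are invented for illustration):

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical poll loop: drain completions on one I/O qpair and
     * stop once the transport reports the device is gone. */
    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc;

            for (;;) {
                    /* 0 = no cap on the number of completions per call */
                    rc = spdk_nvme_qpair_process_completions(qpair, 0);
                    if (rc >= 0) {
                            continue; /* rc completions were reaped */
                    }
                    if (rc == -ENXIO) {
                            /* The "-6 (No such device or address)" case in
                             * this log: the CQ's transport (here, the TCP
                             * connection) no longer exists. */
                            fprintf(stderr, "qpair transport gone, stop polling\n");
                            break;
                    }
                    fprintf(stderr, "process_completions error: %d\n", (int)rc);
                    break;
            }
    }

Once the transport fails, every write still queued on that qpair is completed back with an error, which is why each *ERROR* line in this log is surrounded by a burst of identical write-failure entries.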
[... repeated write-failure entries elided ...]
00:27:20.959 [2024-09-30 22:55:47.501711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure entries elided ...]
00:27:20.959 [2024-09-30 22:55:47.502618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure entries elided ...]
00:27:20.960 [2024-09-30 22:55:47.503537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure entries elided ...]
00:27:20.960 [2024-09-30 22:55:47.505244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:20.960 NVMe io qpair process completion error
[... repeated write-failure entries elided ...]
00:27:20.960 [2024-09-30 22:55:47.506515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure entries elided ...]
00:27:20.961 [2024-09-30 22:55:47.507346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure entries elided ...]
00:27:20.961 [2024-09-30 22:55:47.508293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure entries elided ...]
00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 [2024-09-30 22:55:47.510903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:20.962 NVMe io qpair process completion error 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 [2024-09-30 22:55:47.512084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 2 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 Write completed with error (sct=0, sc=8) 00:27:20.962 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 [2024-09-30 22:55:47.512915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error 
(sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: 
-6 00:27:20.963 [2024-09-30 22:55:47.513841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 
Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.963 starting I/O failed: -6 00:27:20.963 [2024-09-30 22:55:47.515490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.963 NVMe io qpair process completion error 00:27:20.963 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 
00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 [2024-09-30 22:55:47.516639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.964 starting I/O failed: -6 00:27:20.964 starting I/O failed: -6 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 
Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 [2024-09-30 22:55:47.517595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 
00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 [2024-09-30 22:55:47.518511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.964 starting I/O failed: -6 00:27:20.964 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with 
error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 [2024-09-30 22:55:47.521118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 
(No such device or address) on qpair id 3 00:27:20.965 NVMe io qpair process completion error 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 [2024-09-30 22:55:47.522191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: -6 00:27:20.965 Write completed with error (sct=0, sc=8) 00:27:20.965 starting I/O failed: 
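The records above show each failed write's NVMe completion status (sct=0 is the generic status code type; sc=8 under it is status 0x08, command aborted due to SQ deletion) and the -6 (-ENXIO, "No such device or address") return from spdk_nvme_qpair_process_completions() once the TCP connection to the target is gone. Below is a minimal sketch of the initiator-side pattern that produces these three record types. The SPDK calls (spdk_nvme_ns_cmd_write, spdk_nvme_qpair_process_completions, spdk_nvme_cpl_is_error, cpl->status.sct/sc) are the real public API; the surrounding harness (write_done, submit_one_write, poll_qpair) is illustrative only and is not the autotest's actual source.

/* Illustrative sketch only -- assumes an initialized SPDK environment
 * with a connected controller, namespace, and allocated I/O qpair. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback: prints the status fields seen in the log.
 * sct = status code type (0 = generic command status),
 * sc  = status code (0x08 generic = aborted due to SQ deletion). */
static void
write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
        if (spdk_nvme_cpl_is_error(cpl)) {
                printf("Write completed with error (sct=%d, sc=%d)\n",
                       cpl->status.sct, cpl->status.sc);
        }
}

/* Submit one write; a negative return here corresponds to the
 * "starting I/O failed: -6" records. buf and lba are assumed. */
static void
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba)
{
        int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba,
                                        1 /* LBA count */,
                                        write_done, NULL, 0 /* io_flags */);
        if (rc != 0) {
                printf("starting I/O failed: %d\n", rc);
        }
}

/* Poll for completions. After the TCP connection drops, this returns
 * -ENXIO (-6), which SPDK itself logs as "CQ transport error -6
 * (No such device or address) on qpair id N"; the caller should then
 * stop using the qpair. */
static bool
poll_qpair(struct spdk_nvme_qpair *qpair)
{
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
        if (rc < 0) {
                printf("NVMe io qpair process completion error\n");
                return false;
        }
        return true;
}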
00:27:20.965 [repeated write-failure records elided]
00:27:20.965 [2024-09-30 22:55:47.522191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.965 [repeated write-failure records elided]
00:27:20.965 [2024-09-30 22:55:47.523048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.966 [repeated write-failure records elided]
00:27:20.966 [2024-09-30 22:55:47.524011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:20.966 [repeated write-failure records elided]
00:27:20.966 [2024-09-30 22:55:47.525660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:20.966 NVMe io qpair process completion error
00:27:20.966 [repeated write-failure records elided]
00:27:20.967 [2024-09-30 22:55:47.526806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.967 [repeated write-failure records elided]
00:27:20.967 [2024-09-30 22:55:47.527698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.967 [repeated write-failure records elided]
00:27:20.968 [2024-09-30 22:55:47.528621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:20.968 [repeated write-failure records elided]
00:27:20.968 Write completed with error
(sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 Write completed with error (sct=0, sc=8) 00:27:20.968 starting I/O failed: -6 00:27:20.968 [2024-09-30 22:55:47.531091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:20.968 NVMe io qpair process completion error 00:27:20.968 Initializing NVMe Controllers 00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:27:20.968 Controller IO queue size 128, less than required. 00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:20.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:27:20.968 Controller IO queue size 128, less than required.
00:27:20.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
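The (sct=0, sc=8) completions above decode, per the NVMe base spec's generic command status table, to "Command Aborted due to SQ Deletion", which is expected here: shutdown_tc4 deletes the qpairs while writes from spdk_nvme_perf are still in flight. The queue-size warnings are stock perf-tool advice: each controller only advertises 128-entry IO queues, so anything submitted beyond that depth waits inside the NVMe driver. A minimal, hypothetical sketch of acting on that advice follows (only the binary path comes from this log; the flag values are assumptions, with -q/-o/-w/-t/-r being the tool's standard options for queue depth, IO size, workload, runtime, and transport ID):

  # hypothetical invocation; the flags shutdown.sh actually passes are not shown in this log
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w randwrite -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Keeping -q at or below the advertised queue size of 128 keeps submissions on the controller's own queues rather than in the driver's software queue, which is what the warning suggests.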
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:27:20.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:27:20.968 Initialization complete. Launching workers.
00:27:20.968 ========================================================
00:27:20.968 Latency(us)
00:27:20.968 Device Information : IOPS MiB/s Average min max
00:27:20.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1881.18 80.83 68063.18 719.96 124619.20
00:27:20.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1845.54 79.30 69398.34 932.82 126555.69
00:27:20.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1864.01 80.09 68737.39 705.96 146778.81
00:27:20.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1865.10 80.14 68732.43 776.91 145991.76
00:27:20.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1888.79 81.16 67893.71 680.22 121714.45
00:27:20.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1896.61 81.49 67649.25 635.34 134752.39
00:27:20.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1858.15 79.84 69071.97 907.69 120635.28
00:27:20.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1882.92 80.91 67427.43 599.49 120689.56
00:27:20.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1833.37 78.78 69280.45 672.25 121045.52
00:27:20.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1867.92 80.26 68029.03 693.96 120814.79
00:27:20.969 ========================================================
00:27:20.969 Total : 18683.60 802.81 68422.28 599.49 146778.81
00:27:20.969
00:27:20.969 [2024-09-30 22:55:47.536353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa78e0 is same with the state(6) to be set
00:27:20.969 [2024-09-30 22:55:47.536397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa75b0 is same with the state(6) to be set
00:27:20.969 [2024-09-30 22:55:47.536430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa9760 is same with the state(6) to be set
00:27:20.969 [2024-09-30 22:55:47.536459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa7280 is same with the state(6) to be set
00:27:20.969 [2024-09-30 22:55:47.536487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8810 is same with the state(6) to be set
00:27:20.969 [2024-09-30 22:55:47.536516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1fa8b40 is same with the state(6) to be set 00:27:20.969 [2024-09-30 22:55:47.536545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa84e0 is same with the state(6) to be set 00:27:20.969 [2024-09-30 22:55:47.536573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa9430 is same with the state(6) to be set 00:27:20.969 [2024-09-30 22:55:47.536601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa7fd0 is same with the state(6) to be set 00:27:20.969 [2024-09-30 22:55:47.536628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa81b0 is same with the state(6) to be set 00:27:20.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:20.969 22:55:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 782264 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 782264 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 782264 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:21.912 22:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.912 rmmod nvme_tcp 00:27:21.912 rmmod nvme_fabrics 00:27:21.912 rmmod nvme_keyring 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 782039 ']' 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 782039 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 782039 ']' 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 782039 00:27:21.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (782039) - No such process 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 782039 is not found' 00:27:21.912 Process with pid 782039 is not found 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:27:21.912 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:21.913 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:27:21.913 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.913 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.913 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.913 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.913 22:55:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.981 22:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.981 00:27:23.981 real 0m10.331s 00:27:23.981 user 0m27.885s 00:27:23.981 sys 0m3.953s 00:27:23.981 22:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:23.981 22:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:23.981 ************************************ 00:27:23.981 END TEST nvmf_shutdown_tc4 00:27:23.981 ************************************ 00:27:23.981 22:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:27:23.981 00:27:23.981 real 0m43.657s 00:27:23.981 user 1m44.205s 00:27:23.981 sys 0m13.991s 00:27:23.981 22:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:23.981 22:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:23.981 ************************************ 00:27:23.981 END TEST nvmf_shutdown 00:27:23.981 ************************************ 00:27:23.981 22:55:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:23.981 00:27:23.981 real 12m54.172s 00:27:23.981 user 27m5.837s 00:27:23.981 sys 3m50.643s 00:27:24.241 22:55:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:24.241 22:55:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:24.241 ************************************ 00:27:24.241 END TEST nvmf_target_extra 00:27:24.241 ************************************ 00:27:24.241 22:55:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:24.241 22:55:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:24.241 22:55:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:24.241 22:55:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:24.241 ************************************ 00:27:24.241 START TEST nvmf_host 00:27:24.241 ************************************ 00:27:24.241 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:24.241 * Looking for test storage... 
00:27:24.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:24.241 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:24.241 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:27:24.241 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:24.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.502 --rc genhtml_branch_coverage=1 00:27:24.502 --rc genhtml_function_coverage=1 00:27:24.502 --rc genhtml_legend=1 00:27:24.502 --rc geninfo_all_blocks=1 00:27:24.502 --rc geninfo_unexecuted_blocks=1 00:27:24.502 00:27:24.502 ' 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:24.502 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.502 --rc genhtml_branch_coverage=1 00:27:24.502 --rc genhtml_function_coverage=1 00:27:24.502 --rc genhtml_legend=1 00:27:24.502 --rc geninfo_all_blocks=1 00:27:24.502 --rc geninfo_unexecuted_blocks=1 00:27:24.502 00:27:24.502 ' 00:27:24.502 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:24.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.502 --rc genhtml_branch_coverage=1 00:27:24.502 --rc genhtml_function_coverage=1 00:27:24.502 --rc genhtml_legend=1 00:27:24.502 --rc geninfo_all_blocks=1 00:27:24.502 --rc geninfo_unexecuted_blocks=1 00:27:24.502 00:27:24.502 ' 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:24.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.503 --rc genhtml_branch_coverage=1 00:27:24.503 --rc genhtml_function_coverage=1 00:27:24.503 --rc genhtml_legend=1 00:27:24.503 --rc geninfo_all_blocks=1 00:27:24.503 --rc geninfo_unexecuted_blocks=1 00:27:24.503 00:27:24.503 ' 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:24.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.503 ************************************ 00:27:24.503 START TEST nvmf_multicontroller 00:27:24.503 ************************************ 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:24.503 * Looking for test storage... 00:27:24.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:27:24.503 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:24.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.764 --rc genhtml_branch_coverage=1 00:27:24.764 --rc genhtml_function_coverage=1 00:27:24.764 --rc genhtml_legend=1 00:27:24.764 --rc geninfo_all_blocks=1 00:27:24.764 --rc geninfo_unexecuted_blocks=1 00:27:24.764 00:27:24.764 ' 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:24.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.764 --rc genhtml_branch_coverage=1 00:27:24.764 --rc genhtml_function_coverage=1 00:27:24.764 --rc genhtml_legend=1 00:27:24.764 --rc geninfo_all_blocks=1 00:27:24.764 --rc geninfo_unexecuted_blocks=1 00:27:24.764 00:27:24.764 ' 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:24.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.764 --rc genhtml_branch_coverage=1 00:27:24.764 --rc genhtml_function_coverage=1 00:27:24.764 --rc genhtml_legend=1 00:27:24.764 --rc geninfo_all_blocks=1 00:27:24.764 --rc geninfo_unexecuted_blocks=1 00:27:24.764 00:27:24.764 ' 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:24.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.764 --rc genhtml_branch_coverage=1 00:27:24.764 --rc genhtml_function_coverage=1 00:27:24.764 --rc genhtml_legend=1 00:27:24.764 --rc geninfo_all_blocks=1 00:27:24.764 --rc geninfo_unexecuted_blocks=1 00:27:24.764 00:27:24.764 ' 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:24.764 22:55:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.764 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:24.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:24.765 22:55:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.765 22:55:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.904 
22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:32.904 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:32.904 22:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:32.904 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:32.904 Found net devices under 0000:31:00.0: cvl_0_0 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:32.904 Found net devices under 0000:31:00.1: cvl_0_1 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.904 22:55:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.904 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:32.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:27:32.905 00:27:32.905 --- 10.0.0.2 ping statistics --- 00:27:32.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.905 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:27:32.905 00:27:32.905 --- 10.0.0.1 ping statistics --- 00:27:32.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.905 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=788057 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 788057 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 788057 ']' 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:32.905 22:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:32.905 [2024-09-30 22:55:59.449756] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
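The nvmf_tcp_init sequence traced above isolates the target port in a private network namespace so that initiator and target traffic must cross the physical link even though both ends live on one host. A minimal sketch of that setup, reusing the interface names and addresses from this run (illustrative only, not the exact nvmf/common.sh implementation):

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability

Both pings succeeding is what lets nvmf_tcp_init return 0 above; the target application is then launched under "ip netns exec cvl_0_0_ns_spdk" so it binds inside the namespace.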
00:27:32.905 [2024-09-30 22:55:59.449819] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.905 [2024-09-30 22:55:59.543570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:32.905 [2024-09-30 22:55:59.636980] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.905 [2024-09-30 22:55:59.637043] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.905 [2024-09-30 22:55:59.637052] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.905 [2024-09-30 22:55:59.637060] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.905 [2024-09-30 22:55:59.637067] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.905 [2024-09-30 22:55:59.637237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.905 [2024-09-30 22:55:59.637398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.905 [2024-09-30 22:55:59.637398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.477 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.477 [2024-09-30 22:56:00.331921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 Malloc0 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 [2024-09-30 22:56:00.407707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 [2024-09-30 22:56:00.419555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 Malloc1 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=788234 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 788234 /var/tmp/bdevperf.sock 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 788234 ']' 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:33.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
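At this point multicontroller.sh has provisioned the target (two malloc-backed subsystems, cnode1 and cnode2, each listening on ports 4420 and 4421 of 10.0.0.2) and launched bdevperf with -z, which keeps it idle until it is configured over its private RPC socket. A condensed sketch of the same provisioning, assuming rpc_cmd resolves to scripts/rpc.py as in the SPDK autotest helpers:

    RPC="scripts/rpc.py"                                 # rpc_cmd in the trace wraps this script
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...the same four steps repeat for cnode2/Malloc1, then the I/O generator starts:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f

The two listeners per subsystem are what make the multipath attach experiments below possible: 10.0.0.2:4420 and 10.0.0.2:4421 are two distinct paths to the same controller.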
00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.740 22:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 NVMe0n1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.682 1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 request: 00:27:34.682 { 00:27:34.682 "name": "NVMe0", 00:27:34.682 "trtype": "tcp", 00:27:34.682 "traddr": "10.0.0.2", 00:27:34.682 "adrfam": "ipv4", 00:27:34.682 "trsvcid": "4420", 00:27:34.682 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:27:34.682 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:34.682 "hostaddr": "10.0.0.1", 00:27:34.682 "prchk_reftag": false, 00:27:34.682 "prchk_guard": false, 00:27:34.682 "hdgst": false, 00:27:34.682 "ddgst": false, 00:27:34.682 "allow_unrecognized_csi": false, 00:27:34.682 "method": "bdev_nvme_attach_controller", 00:27:34.682 "req_id": 1 00:27:34.682 } 00:27:34.682 Got JSON-RPC error response 00:27:34.682 response: 00:27:34.682 { 00:27:34.682 "code": -114, 00:27:34.682 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:34.682 } 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 request: 00:27:34.682 { 00:27:34.682 "name": "NVMe0", 00:27:34.682 "trtype": "tcp", 00:27:34.682 "traddr": "10.0.0.2", 00:27:34.682 "adrfam": "ipv4", 00:27:34.682 "trsvcid": "4420", 00:27:34.682 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:34.682 "hostaddr": "10.0.0.1", 00:27:34.682 "prchk_reftag": false, 00:27:34.682 "prchk_guard": false, 00:27:34.682 "hdgst": false, 00:27:34.682 "ddgst": false, 00:27:34.682 "allow_unrecognized_csi": false, 00:27:34.682 "method": "bdev_nvme_attach_controller", 00:27:34.682 "req_id": 1 00:27:34.682 } 00:27:34.682 Got JSON-RPC error response 00:27:34.682 response: 00:27:34.682 { 00:27:34.682 "code": -114, 00:27:34.682 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:34.682 } 00:27:34.682 22:56:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 request: 00:27:34.682 { 00:27:34.682 "name": "NVMe0", 00:27:34.682 "trtype": "tcp", 00:27:34.682 "traddr": "10.0.0.2", 00:27:34.682 "adrfam": "ipv4", 00:27:34.682 "trsvcid": "4420", 00:27:34.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.682 "hostaddr": "10.0.0.1", 00:27:34.682 "prchk_reftag": false, 00:27:34.682 "prchk_guard": false, 00:27:34.682 "hdgst": false, 00:27:34.682 "ddgst": false, 00:27:34.682 "multipath": "disable", 00:27:34.682 "allow_unrecognized_csi": false, 00:27:34.682 "method": "bdev_nvme_attach_controller", 00:27:34.682 "req_id": 1 00:27:34.682 } 00:27:34.682 Got JSON-RPC error response 00:27:34.682 response: 00:27:34.682 { 00:27:34.682 "code": -114, 00:27:34.682 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:34.682 } 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.682 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.683 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:34.683 22:56:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:34.683 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:34.683 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:34.683 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.943 request: 00:27:34.943 { 00:27:34.943 "name": "NVMe0", 00:27:34.943 "trtype": "tcp", 00:27:34.943 "traddr": "10.0.0.2", 00:27:34.943 "adrfam": "ipv4", 00:27:34.943 "trsvcid": "4420", 00:27:34.943 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.943 "hostaddr": "10.0.0.1", 00:27:34.943 "prchk_reftag": false, 00:27:34.943 "prchk_guard": false, 00:27:34.943 "hdgst": false, 00:27:34.943 "ddgst": false, 00:27:34.943 "multipath": "failover", 00:27:34.943 "allow_unrecognized_csi": false, 00:27:34.943 "method": "bdev_nvme_attach_controller", 00:27:34.943 "req_id": 1 00:27:34.943 } 00:27:34.943 Got JSON-RPC error response 00:27:34.943 response: 00:27:34.943 { 00:27:34.943 "code": -114, 00:27:34.943 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:34.943 } 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.943 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
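Taken together, the four rejected attach attempts above and the one that succeeds pin down the reuse rule this test is verifying: a repeat of bdev_nvme_attach_controller with an existing controller name (-b NVMe0) is accepted only when it names the same subsystem over a path that does not already exist, and is rejected with -114 when the host identity changes, the subsystem differs, multipath is disabled, or the path is identical. Restated in rpc.py terms, with outcomes as observed in this run:

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    ATTACH="bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -f ipv4 -i 10.0.0.1"
    $RPC $ATTACH -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2021-09-7.io.spdk:00001  # -114: hostnqn differs
    $RPC $ATTACH -s 4420 -n nqn.2016-06.io.spdk:cnode2                                 # -114: different subsystem
    $RPC $ATTACH -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x disable                      # -114: multipath disabled
    $RPC $ATTACH -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover                     # -114: path already attached
    $RPC $ATTACH -s 4421 -n nqn.2016-06.io.spdk:cnode1                                 # OK: port 4421 is a new path

The detach on 4421 and the NVMe1 attach that follow exercise the complementary direction: removing one path and attaching a second controller name to the same subsystem.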
00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.943 22:56:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:35.204 00:27:35.204 22:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.204 22:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:35.204 22:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:35.204 22:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.204 22:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:35.204 22:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.204 22:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:35.204 22:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:36.145 { 00:27:36.145 "results": [ 00:27:36.145 { 00:27:36.145 "job": "NVMe0n1", 00:27:36.145 "core_mask": "0x1", 00:27:36.145 "workload": "write", 00:27:36.145 "status": "finished", 00:27:36.145 "queue_depth": 128, 00:27:36.145 "io_size": 4096, 00:27:36.145 "runtime": 1.003004, 00:27:36.145 "iops": 28815.438422977375, 00:27:36.145 "mibps": 112.56030633975537, 00:27:36.145 "io_failed": 0, 00:27:36.145 "io_timeout": 0, 00:27:36.145 "avg_latency_us": 4434.906373261367, 00:27:36.145 "min_latency_us": 2075.306666666667, 00:27:36.145 "max_latency_us": 13598.72 00:27:36.145 } 00:27:36.145 ], 00:27:36.145 "core_count": 1 00:27:36.145 } 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 788234 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 788234 ']' 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 788234 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 788234 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 788234' 00:27:36.405 killing process with pid 788234 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 788234 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 788234 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.405 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:36.665 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.665 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:36.665 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:36.665 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:36.665 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:36.665 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:27:36.665 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:27:36.665 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:36.665 [2024-09-30 22:56:00.551289] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:27:36.665 [2024-09-30 22:56:00.551366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788234 ] 00:27:36.665 [2024-09-30 22:56:00.633389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.665 [2024-09-30 22:56:00.731879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.665 [2024-09-30 22:56:02.035511] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 87926615-9ce4-4630-8886-2fc06eb04bcb already exists 00:27:36.666 [2024-09-30 22:56:02.035540] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:87926615-9ce4-4630-8886-2fc06eb04bcb alias for bdev NVMe1n1 00:27:36.666 [2024-09-30 22:56:02.035548] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:36.666 Running I/O for 1 seconds... 00:27:36.666 28774.00 IOPS, 112.40 MiB/s 00:27:36.666 Latency(us) 00:27:36.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.666 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:36.666 NVMe0n1 : 1.00 28815.44 112.56 0.00 0.00 4434.91 2075.31 13598.72 00:27:36.666 =================================================================================================================== 00:27:36.666 Total : 28815.44 112.56 0.00 0.00 4434.91 2075.31 13598.72 00:27:36.666 Received shutdown signal, test time was about 1.000000 seconds 00:27:36.666 00:27:36.666 Latency(us) 00:27:36.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.666 =================================================================================================================== 00:27:36.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.666 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.666 rmmod nvme_tcp 00:27:36.666 rmmod nvme_fabrics 00:27:36.666 rmmod nvme_keyring 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 788057 ']' 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@514 -- # killprocess 788057 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 788057 ']' 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 788057 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 788057 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 788057' 00:27:36.666 killing process with pid 788057 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 788057 00:27:36.666 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 788057 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.926 22:56:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.836 22:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.836 00:27:38.836 real 0m14.448s 00:27:38.837 user 0m17.574s 00:27:38.837 sys 0m6.830s 00:27:38.837 22:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:38.837 22:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.837 ************************************ 00:27:38.837 END TEST nvmf_multicontroller 00:27:38.837 ************************************ 00:27:38.837 22:56:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:38.837 22:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:38.837 22:56:05 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:27:38.837 22:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.098 ************************************ 00:27:39.098 START TEST nvmf_aer 00:27:39.098 ************************************ 00:27:39.098 22:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:39.098 * Looking for test storage... 00:27:39.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.098 22:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:39.098 22:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:27:39.098 22:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.098 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:39.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.099 --rc genhtml_branch_coverage=1 00:27:39.099 --rc genhtml_function_coverage=1 00:27:39.099 --rc genhtml_legend=1 00:27:39.099 --rc geninfo_all_blocks=1 00:27:39.099 --rc geninfo_unexecuted_blocks=1 00:27:39.099 00:27:39.099 ' 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:39.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.099 --rc genhtml_branch_coverage=1 00:27:39.099 --rc genhtml_function_coverage=1 00:27:39.099 --rc genhtml_legend=1 00:27:39.099 --rc geninfo_all_blocks=1 00:27:39.099 --rc geninfo_unexecuted_blocks=1 00:27:39.099 00:27:39.099 ' 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:39.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.099 --rc genhtml_branch_coverage=1 00:27:39.099 --rc genhtml_function_coverage=1 00:27:39.099 --rc genhtml_legend=1 00:27:39.099 --rc geninfo_all_blocks=1 00:27:39.099 --rc geninfo_unexecuted_blocks=1 00:27:39.099 00:27:39.099 ' 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:39.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.099 --rc genhtml_branch_coverage=1 00:27:39.099 --rc genhtml_function_coverage=1 00:27:39.099 --rc genhtml_legend=1 00:27:39.099 --rc geninfo_all_blocks=1 00:27:39.099 --rc geninfo_unexecuted_blocks=1 00:27:39.099 00:27:39.099 ' 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.099 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.359 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.359 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.359 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.359 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.359 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.360 22:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.501 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:47.502 22:56:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:47.502 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:47.502 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:47.502 Found net devices under 0000:31:00.0: cvl_0_0 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:47.502 Found net devices under 0000:31:00.1: cvl_0_1 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:47.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:47.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:27:47.502 00:27:47.502 --- 10.0.0.2 ping statistics --- 00:27:47.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.502 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:27:47.502 00:27:47.502 --- 10.0.0.1 ping statistics --- 00:27:47.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.502 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.502 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=793159 00:27:47.503 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 793159 00:27:47.503 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:47.503 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 793159 ']' 00:27:47.503 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.503 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:47.503 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.503 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:47.503 22:56:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.503 [2024-09-30 22:56:13.906413] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
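What the trace above amounts to: nvmftestinit detected two physical E810 ports (cvl_0_0, cvl_0_1), moved one into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, kept the other in the root namespace as the initiator at 10.0.0.1, opened the firewall for port 4420, and verified reachability in both directions with ping. A condensed sketch of that sequence, with the interface names and addresses taken from this run (they will differ on other runners):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the comment tags the rule so nvmftestfini can strip it later with
  # iptables-save | grep -v SPDK_NVMF | iptables-restore (see the teardown further down)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

With connectivity confirmed, NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" so the target process only sees the namespaced port; the SPDK startup log that follows comes from that process.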
00:27:47.503 [2024-09-30 22:56:13.906479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.503 [2024-09-30 22:56:14.000495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.503 [2024-09-30 22:56:14.096347] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.503 [2024-09-30 22:56:14.096407] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.503 [2024-09-30 22:56:14.096417] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.503 [2024-09-30 22:56:14.096424] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.503 [2024-09-30 22:56:14.096430] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.503 [2024-09-30 22:56:14.096625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.503 [2024-09-30 22:56:14.096786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.503 [2024-09-30 22:56:14.096951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.503 [2024-09-30 22:56:14.096997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.764 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.026 [2024-09-30 22:56:14.786551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.026 Malloc0 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.026 [2024-09-30 22:56:14.852324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.026 [ 00:27:48.026 { 00:27:48.026 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:48.026 "subtype": "Discovery", 00:27:48.026 "listen_addresses": [], 00:27:48.026 "allow_any_host": true, 00:27:48.026 "hosts": [] 00:27:48.026 }, 00:27:48.026 { 00:27:48.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.026 "subtype": "NVMe", 00:27:48.026 "listen_addresses": [ 00:27:48.026 { 00:27:48.026 "trtype": "TCP", 00:27:48.026 "adrfam": "IPv4", 00:27:48.026 "traddr": "10.0.0.2", 00:27:48.026 "trsvcid": "4420" 00:27:48.026 } 00:27:48.026 ], 00:27:48.026 "allow_any_host": true, 00:27:48.026 "hosts": [], 00:27:48.026 "serial_number": "SPDK00000000000001", 00:27:48.026 "model_number": "SPDK bdev Controller", 00:27:48.026 "max_namespaces": 2, 00:27:48.026 "min_cntlid": 1, 00:27:48.026 "max_cntlid": 65519, 00:27:48.026 "namespaces": [ 00:27:48.026 { 00:27:48.026 "nsid": 1, 00:27:48.026 "bdev_name": "Malloc0", 00:27:48.026 "name": "Malloc0", 00:27:48.026 "nguid": "9284BAE5EE9E47AF8697B96E17BFE38A", 00:27:48.026 "uuid": "9284bae5-ee9e-47af-8697-b96e17bfe38a" 00:27:48.026 } 00:27:48.026 ] 00:27:48.026 } 00:27:48.026 ] 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=793355 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:48.026 22:56:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.288 Malloc1 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.288 Asynchronous Event Request test 00:27:48.288 Attaching to 10.0.0.2 00:27:48.288 Attached to 10.0.0.2 00:27:48.288 Registering asynchronous event callbacks... 00:27:48.288 Starting namespace attribute notice tests for all controllers... 00:27:48.288 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:48.288 aer_cb - Changed Namespace 00:27:48.288 Cleaning up... 
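Stripped of trace noise, the aer.sh body that just ran is a compact AER round-trip: build the target configuration over RPC, start the host-side listener, then mutate the subsystem to provoke a Namespace Attribute Changed event. A condensed sketch of the flow traced above (rpc_cmd is the test framework's wrapper around scripts/rpc.py; paths are relative to the SPDK tree):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host-side listener: connects, arms AER callbacks, then touches the sync file
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
  aerpid=$!

  # waitforfile: poll for the touch file, 0.1 s per attempt, up to 200 attempts
  i=0
  while [ ! -e /tmp/aer_touch_file ] && [ "$i" -lt 200 ]; do sleep 0.1; i=$((i + 1)); done

  # adding a second namespace is what fires the AEN seen in the output above
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait $aerpid

The "aer_cb for log page 4" line above is the payoff: log page 0x04 is the Changed Namespace List, confirming the controller delivered the notification for the new nsid. The nvmf_get_subsystems dump that follows shows both namespaces attached to cnode1.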
00:27:48.288 [ 00:27:48.288 { 00:27:48.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:48.288 "subtype": "Discovery", 00:27:48.288 "listen_addresses": [], 00:27:48.288 "allow_any_host": true, 00:27:48.288 "hosts": [] 00:27:48.288 }, 00:27:48.288 { 00:27:48.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.288 "subtype": "NVMe", 00:27:48.288 "listen_addresses": [ 00:27:48.288 { 00:27:48.288 "trtype": "TCP", 00:27:48.288 "adrfam": "IPv4", 00:27:48.288 "traddr": "10.0.0.2", 00:27:48.288 "trsvcid": "4420" 00:27:48.288 } 00:27:48.288 ], 00:27:48.288 "allow_any_host": true, 00:27:48.288 "hosts": [], 00:27:48.288 "serial_number": "SPDK00000000000001", 00:27:48.288 "model_number": "SPDK bdev Controller", 00:27:48.288 "max_namespaces": 2, 00:27:48.288 "min_cntlid": 1, 00:27:48.288 "max_cntlid": 65519, 00:27:48.288 "namespaces": [ 00:27:48.288 { 00:27:48.288 "nsid": 1, 00:27:48.288 "bdev_name": "Malloc0", 00:27:48.288 "name": "Malloc0", 00:27:48.288 "nguid": "9284BAE5EE9E47AF8697B96E17BFE38A", 00:27:48.288 "uuid": "9284bae5-ee9e-47af-8697-b96e17bfe38a" 00:27:48.288 }, 00:27:48.288 { 00:27:48.288 "nsid": 2, 00:27:48.288 "bdev_name": "Malloc1", 00:27:48.288 "name": "Malloc1", 00:27:48.288 "nguid": "E038A2F880D5472690AD0458D364AE25", 00:27:48.288 "uuid": "e038a2f8-80d5-4726-90ad-0458d364ae25" 00:27:48.288 } 00:27:48.288 ] 00:27:48.288 } 00:27:48.288 ] 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 793355 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:48.288 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.289 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.289 rmmod 
nvme_tcp 00:27:48.289 rmmod nvme_fabrics 00:27:48.289 rmmod nvme_keyring 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 793159 ']' 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 793159 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 793159 ']' 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 793159 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 793159 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 793159' 00:27:48.550 killing process with pid 793159 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 793159 00:27:48.550 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 793159 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.811 22:56:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.727 22:56:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.727 00:27:50.727 real 0m11.797s 00:27:50.727 user 0m8.229s 00:27:50.727 sys 0m6.317s 00:27:50.727 22:56:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:50.727 22:56:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:50.727 ************************************ 00:27:50.727 END TEST nvmf_aer 00:27:50.727 ************************************ 00:27:50.727 22:56:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:50.727 22:56:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:50.727 22:56:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:50.727 22:56:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.989 ************************************ 00:27:50.989 START TEST nvmf_async_init 00:27:50.989 ************************************ 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:50.989 * Looking for test storage... 00:27:50.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:50.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.989 --rc genhtml_branch_coverage=1 00:27:50.989 --rc genhtml_function_coverage=1 00:27:50.989 --rc genhtml_legend=1 00:27:50.989 --rc geninfo_all_blocks=1 00:27:50.989 --rc geninfo_unexecuted_blocks=1 00:27:50.989 00:27:50.989 ' 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:50.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.989 --rc genhtml_branch_coverage=1 00:27:50.989 --rc genhtml_function_coverage=1 00:27:50.989 --rc genhtml_legend=1 00:27:50.989 --rc geninfo_all_blocks=1 00:27:50.989 --rc geninfo_unexecuted_blocks=1 00:27:50.989 00:27:50.989 ' 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:50.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.989 --rc genhtml_branch_coverage=1 00:27:50.989 --rc genhtml_function_coverage=1 00:27:50.989 --rc genhtml_legend=1 00:27:50.989 --rc geninfo_all_blocks=1 00:27:50.989 --rc geninfo_unexecuted_blocks=1 00:27:50.989 00:27:50.989 ' 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:50.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.989 --rc genhtml_branch_coverage=1 00:27:50.989 --rc genhtml_function_coverage=1 00:27:50.989 --rc genhtml_legend=1 00:27:50.989 --rc geninfo_all_blocks=1 00:27:50.989 --rc geninfo_unexecuted_blocks=1 00:27:50.989 00:27:50.989 ' 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.989 22:56:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.989 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:50.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:50.990 22:56:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:50.990 22:56:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=384077c68ac847a28d613b3daec38645 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.251 22:56:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.399 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.399 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.399 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:59.400 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:59.400 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # 
(( 0 > 0 )) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:59.400 Found net devices under 0000:31:00.0: cvl_0_0 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:59.400 Found net devices under 0000:31:00.1: cvl_0_1 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.400 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:27:59.401 00:27:59.401 --- 10.0.0.2 ping statistics --- 00:27:59.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.401 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:27:59.401 00:27:59.401 --- 10.0.0.1 ping statistics --- 00:27:59.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.401 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=797743 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 797743 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 797743 ']' 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.401 22:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.401 [2024-09-30 22:56:25.778342] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:27:59.401 [2024-09-30 22:56:25.778414] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.401 [2024-09-30 22:56:25.869720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.401 [2024-09-30 22:56:25.965948] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.401 [2024-09-30 22:56:25.966008] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.401 [2024-09-30 22:56:25.966023] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.401 [2024-09-30 22:56:25.966031] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.401 [2024-09-30 22:56:25.966037] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.401 [2024-09-30 22:56:25.966063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.662 [2024-09-30 22:56:26.640451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.662 null0 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:59.662 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.922 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 384077c68ac847a28d613b3daec38645 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.923 [2024-09-30 22:56:26.700819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.923 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.183 nvme0n1 00:28:00.183 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.183 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:00.183 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.183 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.183 [ 00:28:00.183 { 00:28:00.183 "name": "nvme0n1", 00:28:00.183 "aliases": [ 00:28:00.183 "384077c6-8ac8-47a2-8d61-3b3daec38645" 00:28:00.183 ], 00:28:00.183 "product_name": "NVMe disk", 00:28:00.183 "block_size": 512, 00:28:00.183 "num_blocks": 2097152, 00:28:00.183 "uuid": "384077c6-8ac8-47a2-8d61-3b3daec38645", 00:28:00.183 "numa_id": 0, 00:28:00.183 "assigned_rate_limits": { 00:28:00.183 "rw_ios_per_sec": 0, 00:28:00.183 "rw_mbytes_per_sec": 0, 00:28:00.183 "r_mbytes_per_sec": 0, 00:28:00.183 "w_mbytes_per_sec": 0 00:28:00.183 }, 00:28:00.183 "claimed": false, 00:28:00.183 "zoned": false, 00:28:00.183 "supported_io_types": { 00:28:00.183 "read": true, 00:28:00.183 "write": true, 00:28:00.183 "unmap": false, 00:28:00.183 "flush": true, 00:28:00.183 "reset": true, 00:28:00.183 "nvme_admin": true, 00:28:00.183 "nvme_io": true, 00:28:00.183 "nvme_io_md": false, 00:28:00.183 "write_zeroes": true, 00:28:00.183 "zcopy": false, 00:28:00.183 "get_zone_info": false, 00:28:00.183 "zone_management": false, 00:28:00.183 "zone_append": false, 00:28:00.183 "compare": true, 00:28:00.183 "compare_and_write": true, 00:28:00.183 "abort": true, 00:28:00.183 "seek_hole": false, 00:28:00.183 "seek_data": false, 00:28:00.183 "copy": true, 00:28:00.183 "nvme_iov_md": false 00:28:00.183 }, 00:28:00.183 
"memory_domains": [ 00:28:00.183 { 00:28:00.183 "dma_device_id": "system", 00:28:00.183 "dma_device_type": 1 00:28:00.183 } 00:28:00.183 ], 00:28:00.183 "driver_specific": { 00:28:00.183 "nvme": [ 00:28:00.183 { 00:28:00.183 "trid": { 00:28:00.183 "trtype": "TCP", 00:28:00.183 "adrfam": "IPv4", 00:28:00.183 "traddr": "10.0.0.2", 00:28:00.183 "trsvcid": "4420", 00:28:00.183 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:00.183 }, 00:28:00.183 "ctrlr_data": { 00:28:00.183 "cntlid": 1, 00:28:00.183 "vendor_id": "0x8086", 00:28:00.183 "model_number": "SPDK bdev Controller", 00:28:00.183 "serial_number": "00000000000000000000", 00:28:00.183 "firmware_revision": "25.01", 00:28:00.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.183 "oacs": { 00:28:00.183 "security": 0, 00:28:00.183 "format": 0, 00:28:00.183 "firmware": 0, 00:28:00.183 "ns_manage": 0 00:28:00.183 }, 00:28:00.183 "multi_ctrlr": true, 00:28:00.183 "ana_reporting": false 00:28:00.183 }, 00:28:00.183 "vs": { 00:28:00.183 "nvme_version": "1.3" 00:28:00.183 }, 00:28:00.183 "ns_data": { 00:28:00.183 "id": 1, 00:28:00.183 "can_share": true 00:28:00.183 } 00:28:00.183 } 00:28:00.183 ], 00:28:00.183 "mp_policy": "active_passive" 00:28:00.183 } 00:28:00.183 } 00:28:00.183 ] 00:28:00.183 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.184 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:00.184 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.184 22:56:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.184 [2024-09-30 22:56:26.977499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:00.184 [2024-09-30 22:56:26.977586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15263e0 (9): Bad file descriptor 00:28:00.184 [2024-09-30 22:56:27.110008] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.184 [ 00:28:00.184 { 00:28:00.184 "name": "nvme0n1", 00:28:00.184 "aliases": [ 00:28:00.184 "384077c6-8ac8-47a2-8d61-3b3daec38645" 00:28:00.184 ], 00:28:00.184 "product_name": "NVMe disk", 00:28:00.184 "block_size": 512, 00:28:00.184 "num_blocks": 2097152, 00:28:00.184 "uuid": "384077c6-8ac8-47a2-8d61-3b3daec38645", 00:28:00.184 "numa_id": 0, 00:28:00.184 "assigned_rate_limits": { 00:28:00.184 "rw_ios_per_sec": 0, 00:28:00.184 "rw_mbytes_per_sec": 0, 00:28:00.184 "r_mbytes_per_sec": 0, 00:28:00.184 "w_mbytes_per_sec": 0 00:28:00.184 }, 00:28:00.184 "claimed": false, 00:28:00.184 "zoned": false, 00:28:00.184 "supported_io_types": { 00:28:00.184 "read": true, 00:28:00.184 "write": true, 00:28:00.184 "unmap": false, 00:28:00.184 "flush": true, 00:28:00.184 "reset": true, 00:28:00.184 "nvme_admin": true, 00:28:00.184 "nvme_io": true, 00:28:00.184 "nvme_io_md": false, 00:28:00.184 "write_zeroes": true, 00:28:00.184 "zcopy": false, 00:28:00.184 "get_zone_info": false, 00:28:00.184 "zone_management": false, 00:28:00.184 "zone_append": false, 00:28:00.184 "compare": true, 00:28:00.184 "compare_and_write": true, 00:28:00.184 "abort": true, 00:28:00.184 "seek_hole": false, 00:28:00.184 "seek_data": false, 00:28:00.184 "copy": true, 00:28:00.184 "nvme_iov_md": false 00:28:00.184 }, 00:28:00.184 "memory_domains": [ 00:28:00.184 { 00:28:00.184 "dma_device_id": "system", 00:28:00.184 "dma_device_type": 1 00:28:00.184 } 00:28:00.184 ], 00:28:00.184 "driver_specific": { 00:28:00.184 "nvme": [ 00:28:00.184 { 00:28:00.184 "trid": { 00:28:00.184 "trtype": "TCP", 00:28:00.184 "adrfam": "IPv4", 00:28:00.184 "traddr": "10.0.0.2", 00:28:00.184 "trsvcid": "4420", 00:28:00.184 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:00.184 }, 00:28:00.184 "ctrlr_data": { 00:28:00.184 "cntlid": 2, 00:28:00.184 "vendor_id": "0x8086", 00:28:00.184 "model_number": "SPDK bdev Controller", 00:28:00.184 "serial_number": "00000000000000000000", 00:28:00.184 "firmware_revision": "25.01", 00:28:00.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.184 "oacs": { 00:28:00.184 "security": 0, 00:28:00.184 "format": 0, 00:28:00.184 "firmware": 0, 00:28:00.184 "ns_manage": 0 00:28:00.184 }, 00:28:00.184 "multi_ctrlr": true, 00:28:00.184 "ana_reporting": false 00:28:00.184 }, 00:28:00.184 "vs": { 00:28:00.184 "nvme_version": "1.3" 00:28:00.184 }, 00:28:00.184 "ns_data": { 00:28:00.184 "id": 1, 00:28:00.184 "can_share": true 00:28:00.184 } 00:28:00.184 } 00:28:00.184 ], 00:28:00.184 "mp_policy": "active_passive" 00:28:00.184 } 00:28:00.184 } 00:28:00.184 ] 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
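From here async_init switches to the TLS path: write an interchange-format PSK to a mktemp file, register it with keyring_file_add_key, disable allow-any-host on the subsystem, open a second listener on port 4421 with --secure-channel, grant host1 access with --psk, then attach over TLS. A condensed sketch of the same flow, assuming the rpc.py client and the target already set up earlier in this log (the key and NQNs are copied verbatim from the log; the key file path is illustrative):

# Test PSK in NVMe TLS interchange format (the same key the test generates above).
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.txt
chmod 0600 /tmp/psk.txt
./scripts/rpc.py keyring_file_add_key key0 /tmp/psk.txt
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk key0
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0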
00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wlJwwAvBJe 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wlJwwAvBJe 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.wlJwwAvBJe 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.184 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.445 [2024-09-30 22:56:27.202180] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:00.445 [2024-09-30 22:56:27.202361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.445 [2024-09-30 22:56:27.226257] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:00.445 nvme0n1 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.445 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.445 [ 00:28:00.445 { 00:28:00.445 "name": "nvme0n1", 00:28:00.445 "aliases": [ 00:28:00.446 "384077c6-8ac8-47a2-8d61-3b3daec38645" 00:28:00.446 ], 00:28:00.446 "product_name": "NVMe disk", 00:28:00.446 "block_size": 512, 00:28:00.446 "num_blocks": 2097152, 00:28:00.446 "uuid": "384077c6-8ac8-47a2-8d61-3b3daec38645", 00:28:00.446 "numa_id": 0, 00:28:00.446 "assigned_rate_limits": { 00:28:00.446 "rw_ios_per_sec": 0, 00:28:00.446 "rw_mbytes_per_sec": 0, 00:28:00.446 "r_mbytes_per_sec": 0, 00:28:00.446 "w_mbytes_per_sec": 0 00:28:00.446 }, 00:28:00.446 "claimed": false, 00:28:00.446 "zoned": false, 00:28:00.446 "supported_io_types": { 00:28:00.446 "read": true, 00:28:00.446 "write": true, 00:28:00.446 "unmap": false, 00:28:00.446 "flush": true, 00:28:00.446 "reset": true, 00:28:00.446 "nvme_admin": true, 00:28:00.446 "nvme_io": true, 00:28:00.446 "nvme_io_md": false, 00:28:00.446 "write_zeroes": true, 00:28:00.446 "zcopy": false, 00:28:00.446 "get_zone_info": false, 00:28:00.446 "zone_management": false, 00:28:00.446 "zone_append": false, 00:28:00.446 "compare": true, 00:28:00.446 "compare_and_write": true, 00:28:00.446 "abort": true, 00:28:00.446 "seek_hole": false, 00:28:00.446 "seek_data": false, 00:28:00.446 "copy": true, 00:28:00.446 "nvme_iov_md": false 00:28:00.446 }, 00:28:00.446 "memory_domains": [ 00:28:00.446 { 00:28:00.446 "dma_device_id": "system", 00:28:00.446 "dma_device_type": 1 00:28:00.446 } 00:28:00.446 ], 00:28:00.446 "driver_specific": { 00:28:00.446 "nvme": [ 00:28:00.446 { 00:28:00.446 "trid": { 00:28:00.446 "trtype": "TCP", 00:28:00.446 "adrfam": "IPv4", 00:28:00.446 "traddr": "10.0.0.2", 00:28:00.446 "trsvcid": "4421", 00:28:00.446 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:00.446 }, 00:28:00.446 "ctrlr_data": { 00:28:00.446 "cntlid": 3, 00:28:00.446 "vendor_id": "0x8086", 00:28:00.446 "model_number": "SPDK bdev Controller", 00:28:00.446 "serial_number": "00000000000000000000", 00:28:00.446 "firmware_revision": "25.01", 00:28:00.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.446 "oacs": { 00:28:00.446 "security": 0, 00:28:00.446 "format": 0, 00:28:00.446 "firmware": 0, 00:28:00.446 "ns_manage": 0 00:28:00.446 }, 00:28:00.446 "multi_ctrlr": true, 00:28:00.446 "ana_reporting": false 00:28:00.446 }, 00:28:00.446 "vs": { 00:28:00.446 "nvme_version": "1.3" 00:28:00.446 }, 00:28:00.446 "ns_data": { 00:28:00.446 "id": 1, 00:28:00.446 "can_share": true 00:28:00.446 } 00:28:00.446 } 00:28:00.446 ], 00:28:00.446 "mp_policy": "active_passive" 00:28:00.446 } 00:28:00.446 } 00:28:00.446 ] 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.wlJwwAvBJe 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
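nvmftestfini below is the standard teardown: unload the kernel initiator modules, kill the target by pid, strip the SPDK_NVMF-tagged iptables rule, and flush the test interfaces before the namespace goes away. Roughly the hand-run equivalent, as a sketch using the names from this run (_remove_spdk_ns is not expanded in this log; ip netns delete is assumed to be its effect):

# Initiator-side kernel modules loaded for the test.
modprobe -r nvme-tcp nvme-fabrics nvme-keyring
# Target process (pid 797743 in this run).
kill 797743
# Drop only the rules this harness tagged with the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Flush the initiator address and remove the target namespace;
# deleting the netns returns cvl_0_0 to the root namespace.
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk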
00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:00.446 rmmod nvme_tcp 00:28:00.446 rmmod nvme_fabrics 00:28:00.446 rmmod nvme_keyring 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 797743 ']' 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 797743 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 797743 ']' 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 797743 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:00.446 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 797743 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 797743' 00:28:00.707 killing process with pid 797743 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 797743 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 797743 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.707 
22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.707 22:56:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:03.253 00:28:03.253 real 0m11.991s 00:28:03.253 user 0m4.324s 00:28:03.253 sys 0m6.226s 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:03.253 ************************************ 00:28:03.253 END TEST nvmf_async_init 00:28:03.253 ************************************ 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.253 ************************************ 00:28:03.253 START TEST dma 00:28:03.253 ************************************ 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:03.253 * Looking for test storage... 00:28:03.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:28:03.253 22:56:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:03.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.253 --rc genhtml_branch_coverage=1 00:28:03.253 --rc genhtml_function_coverage=1 00:28:03.253 --rc genhtml_legend=1 00:28:03.253 --rc geninfo_all_blocks=1 00:28:03.253 --rc geninfo_unexecuted_blocks=1 00:28:03.253 00:28:03.253 ' 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:03.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.253 --rc genhtml_branch_coverage=1 00:28:03.253 --rc genhtml_function_coverage=1 00:28:03.253 --rc genhtml_legend=1 00:28:03.253 --rc geninfo_all_blocks=1 00:28:03.253 --rc geninfo_unexecuted_blocks=1 00:28:03.253 00:28:03.253 ' 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:03.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.253 --rc genhtml_branch_coverage=1 00:28:03.253 --rc genhtml_function_coverage=1 00:28:03.253 --rc genhtml_legend=1 00:28:03.253 --rc geninfo_all_blocks=1 00:28:03.253 --rc geninfo_unexecuted_blocks=1 00:28:03.253 00:28:03.253 ' 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:03.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.253 --rc genhtml_branch_coverage=1 00:28:03.253 --rc genhtml_function_coverage=1 00:28:03.253 --rc genhtml_legend=1 00:28:03.253 --rc geninfo_all_blocks=1 00:28:03.253 --rc geninfo_unexecuted_blocks=1 00:28:03.253 00:28:03.253 ' 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.253 
22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.253 22:56:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:03.254 00:28:03.254 real 0m0.245s 00:28:03.254 user 0m0.135s 00:28:03.254 sys 0m0.125s 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:03.254 ************************************ 00:28:03.254 END TEST dma 00:28:03.254 ************************************ 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.254 ************************************ 00:28:03.254 START TEST nvmf_identify 00:28:03.254 
************************************ 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:03.254 * Looking for test storage... 00:28:03.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.254 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.514 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:03.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.515 --rc genhtml_branch_coverage=1 00:28:03.515 --rc genhtml_function_coverage=1 00:28:03.515 --rc genhtml_legend=1 00:28:03.515 --rc geninfo_all_blocks=1 00:28:03.515 --rc geninfo_unexecuted_blocks=1 00:28:03.515 00:28:03.515 ' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:03.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.515 --rc genhtml_branch_coverage=1 00:28:03.515 --rc genhtml_function_coverage=1 00:28:03.515 --rc genhtml_legend=1 00:28:03.515 --rc geninfo_all_blocks=1 00:28:03.515 --rc geninfo_unexecuted_blocks=1 00:28:03.515 00:28:03.515 ' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:03.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.515 --rc genhtml_branch_coverage=1 00:28:03.515 --rc genhtml_function_coverage=1 00:28:03.515 --rc genhtml_legend=1 00:28:03.515 --rc geninfo_all_blocks=1 00:28:03.515 --rc geninfo_unexecuted_blocks=1 00:28:03.515 00:28:03.515 ' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:03.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.515 --rc genhtml_branch_coverage=1 00:28:03.515 --rc genhtml_function_coverage=1 00:28:03.515 --rc genhtml_legend=1 00:28:03.515 --rc geninfo_all_blocks=1 00:28:03.515 --rc geninfo_unexecuted_blocks=1 00:28:03.515 00:28:03.515 ' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.515 22:56:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:11.656 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:11.656 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.656 
22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:11.656 Found net devices under 0000:31:00.0: cvl_0_0 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:11.656 Found net devices under 0000:31:00.1: cvl_0_1 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.656 22:56:37 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.656 22:56:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.656 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.656 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.656 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.656 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:28:11.656 00:28:11.656 --- 10.0.0.2 ping statistics --- 00:28:11.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.656 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:28:11.656 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:28:11.656 00:28:11.656 --- 10.0.0.1 ping statistics --- 00:28:11.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.656 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:28:11.656 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.656 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:28:11.656 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=802440 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 802440 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 802440 ']' 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.657 22:56:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.657 [2024-09-30 22:56:38.202947] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
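
At this point the harness has turned the two E810 ports into a point-to-point topology: cvl_0_0 (10.0.0.2, the target side) was moved into the cvl_0_0_ns_spdk network namespace, cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace, an iptables rule opens TCP/4420 on the initiator interface, and the two pings above verify reachability in both directions. The nvmf_tgt launch above then simply runs inside that namespace; a condensed sketch of the equivalent by-hand bring-up (the socket poll is a stand-in for the harness's waitforlisten helper):

    modprobe nvme-tcp                                    # kernel NVMe/TCP initiator for later connects
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &     # -m 0xF: 4 reactor cores, -e 0xFFFF: all tracepoint groups
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # RPC socket appears once the target is up
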
00:28:11.657 [2024-09-30 22:56:38.203014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.657 [2024-09-30 22:56:38.295101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:11.657 [2024-09-30 22:56:38.395030] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.657 [2024-09-30 22:56:38.395094] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.657 [2024-09-30 22:56:38.395102] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.657 [2024-09-30 22:56:38.395109] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.657 [2024-09-30 22:56:38.395115] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.657 [2024-09-30 22:56:38.395280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.657 [2024-09-30 22:56:38.395443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.657 [2024-09-30 22:56:38.395603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.657 [2024-09-30 22:56:38.395604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 [2024-09-30 22:56:39.040154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 Malloc0 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 [2024-09-30 22:56:39.149999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.232 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 [ 00:28:12.232 { 00:28:12.232 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:12.232 "subtype": "Discovery", 00:28:12.232 "listen_addresses": [ 00:28:12.232 { 00:28:12.232 "trtype": "TCP", 00:28:12.232 "adrfam": "IPv4", 00:28:12.232 "traddr": "10.0.0.2", 00:28:12.232 "trsvcid": "4420" 00:28:12.232 } 00:28:12.232 ], 00:28:12.232 "allow_any_host": true, 00:28:12.232 "hosts": [] 00:28:12.232 }, 00:28:12.232 { 00:28:12.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.232 "subtype": "NVMe", 00:28:12.232 "listen_addresses": [ 00:28:12.232 { 00:28:12.232 "trtype": "TCP", 00:28:12.232 "adrfam": "IPv4", 00:28:12.232 "traddr": "10.0.0.2", 00:28:12.232 "trsvcid": "4420" 00:28:12.232 } 00:28:12.232 ], 00:28:12.232 "allow_any_host": true, 00:28:12.232 "hosts": [], 00:28:12.232 "serial_number": "SPDK00000000000001", 00:28:12.232 "model_number": "SPDK bdev Controller", 00:28:12.232 "max_namespaces": 32, 00:28:12.232 "min_cntlid": 1, 00:28:12.232 "max_cntlid": 65519, 00:28:12.232 "namespaces": [ 00:28:12.232 { 00:28:12.232 "nsid": 1, 00:28:12.233 "bdev_name": "Malloc0", 00:28:12.233 "name": "Malloc0", 00:28:12.233 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:12.233 "eui64": "ABCDEF0123456789", 00:28:12.233 "uuid": "16d5e9d2-0fd9-4a8c-aa03-6616a390ec18" 00:28:12.233 } 00:28:12.233 ] 00:28:12.233 } 00:28:12.233 ] 00:28:12.233 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.233 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:12.233 [2024-09-30 22:56:39.213308] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:28:12.233 [2024-09-30 22:56:39.213358] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802734 ] 00:28:12.497 [2024-09-30 22:56:39.252106] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:12.497 [2024-09-30 22:56:39.252184] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:12.497 [2024-09-30 22:56:39.252190] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:12.497 [2024-09-30 22:56:39.252205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:12.497 [2024-09-30 22:56:39.252217] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:12.497 [2024-09-30 22:56:39.253135] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:12.497 [2024-09-30 22:56:39.253182] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1304620 0 00:28:12.497 [2024-09-30 22:56:39.266907] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:12.497 [2024-09-30 22:56:39.266924] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:12.497 [2024-09-30 22:56:39.266930] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:12.497 [2024-09-30 22:56:39.266934] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:12.497 [2024-09-30 22:56:39.266972] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.266980] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.266984] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.497 [2024-09-30 22:56:39.267001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:12.497 [2024-09-30 22:56:39.267025] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.497 [2024-09-30 22:56:39.274906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.497 [2024-09-30 22:56:39.274916] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.497 [2024-09-30 22:56:39.274920] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.274926] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.497 [2024-09-30 22:56:39.274941] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:12.497 [2024-09-30 22:56:39.274951] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:12.497 [2024-09-30 22:56:39.274956] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:12.497 [2024-09-30 22:56:39.274974] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.274979] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.274982] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.497 [2024-09-30 22:56:39.274991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.497 [2024-09-30 22:56:39.275007] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.497 [2024-09-30 22:56:39.275226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.497 [2024-09-30 22:56:39.275234] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.497 [2024-09-30 22:56:39.275237] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275241] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.497 [2024-09-30 22:56:39.275247] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:12.497 [2024-09-30 22:56:39.275255] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:12.497 [2024-09-30 22:56:39.275262] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275265] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275274] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.497 [2024-09-30 22:56:39.275281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.497 [2024-09-30 22:56:39.275293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.497 [2024-09-30 22:56:39.275505] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.497 [2024-09-30 22:56:39.275511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.497 [2024-09-30 22:56:39.275515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275519] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.497 [2024-09-30 22:56:39.275524] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:12.497 [2024-09-30 22:56:39.275533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:12.497 [2024-09-30 22:56:39.275540] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275544] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.497 [2024-09-30 22:56:39.275554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.497 [2024-09-30 22:56:39.275564] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.497 
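
Everything from the connect onward is the standard fabrics bring-up on the admin queue: an ICReq/ICResp exchange opens the TCP qpair, a FABRIC CONNECT capsule returns CNTLID 0x0001, and then PROPERTY GET/SET capsules stand in for the VS/CAP/CC/CSTS register accesses a PCIe host would do via MMIO, driving the controller through disable, enable, and ready. The same discovery handshake can be reproduced against this target from the initiator-side namespace with the kernel host stack, assuming nvme-cli is installed:

    nvme discover -t tcp -a 10.0.0.2 -s 4420    # prints the same two-record discovery log dumped below
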
[2024-09-30 22:56:39.275763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.497 [2024-09-30 22:56:39.275770] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.497 [2024-09-30 22:56:39.275773] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275777] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.497 [2024-09-30 22:56:39.275782] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:12.497 [2024-09-30 22:56:39.275792] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.497 [2024-09-30 22:56:39.275799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.497 [2024-09-30 22:56:39.275806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.497 [2024-09-30 22:56:39.275816] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.497 [2024-09-30 22:56:39.276031] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.497 [2024-09-30 22:56:39.276038] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.498 [2024-09-30 22:56:39.276042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276046] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.498 [2024-09-30 22:56:39.276051] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:12.498 [2024-09-30 22:56:39.276056] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:12.498 [2024-09-30 22:56:39.276064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:12.498 [2024-09-30 22:56:39.276170] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:12.498 [2024-09-30 22:56:39.276174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:12.498 [2024-09-30 22:56:39.276187] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276191] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276195] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.276202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.498 [2024-09-30 22:56:39.276213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.498 [2024-09-30 22:56:39.276389] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.498 [2024-09-30 22:56:39.276396] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:28:12.498 [2024-09-30 22:56:39.276399] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.498 [2024-09-30 22:56:39.276408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:12.498 [2024-09-30 22:56:39.276417] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.276431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.498 [2024-09-30 22:56:39.276441] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.498 [2024-09-30 22:56:39.276654] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.498 [2024-09-30 22:56:39.276660] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.498 [2024-09-30 22:56:39.276664] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276667] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.498 [2024-09-30 22:56:39.276672] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:12.498 [2024-09-30 22:56:39.276677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:12.498 [2024-09-30 22:56:39.276685] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:12.498 [2024-09-30 22:56:39.276694] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:12.498 [2024-09-30 22:56:39.276704] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276708] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.276715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.498 [2024-09-30 22:56:39.276726] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.498 [2024-09-30 22:56:39.276983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.498 [2024-09-30 22:56:39.276990] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.498 [2024-09-30 22:56:39.276994] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.276999] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1304620): datao=0, datal=4096, cccid=0 00:28:12.498 [2024-09-30 22:56:39.277004] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1364480) on tqpair(0x1304620): expected_datao=0, 
payload_size=4096 00:28:12.498 [2024-09-30 22:56:39.277011] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.277025] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.277030] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.319901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.498 [2024-09-30 22:56:39.319916] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.498 [2024-09-30 22:56:39.319920] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.319925] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.498 [2024-09-30 22:56:39.319935] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:12.498 [2024-09-30 22:56:39.319941] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:12.498 [2024-09-30 22:56:39.319945] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:12.498 [2024-09-30 22:56:39.319951] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:12.498 [2024-09-30 22:56:39.319956] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:12.498 [2024-09-30 22:56:39.319961] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:12.498 [2024-09-30 22:56:39.319971] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:12.498 [2024-09-30 22:56:39.319979] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.319983] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.319987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.319995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:12.498 [2024-09-30 22:56:39.320010] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.498 [2024-09-30 22:56:39.320201] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.498 [2024-09-30 22:56:39.320208] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.498 [2024-09-30 22:56:39.320212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.498 [2024-09-30 22:56:39.320225] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320229] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320233] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.320239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.498 [2024-09-30 22:56:39.320246] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320253] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.320259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.498 [2024-09-30 22:56:39.320265] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320269] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320273] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.320279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.498 [2024-09-30 22:56:39.320289] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320293] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320297] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.320303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.498 [2024-09-30 22:56:39.320308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:12.498 [2024-09-30 22:56:39.320321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:12.498 [2024-09-30 22:56:39.320328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320332] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1304620) 00:28:12.498 [2024-09-30 22:56:39.320339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.498 [2024-09-30 22:56:39.320352] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364480, cid 0, qid 0 00:28:12.498 [2024-09-30 22:56:39.320358] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364600, cid 1, qid 0 00:28:12.498 [2024-09-30 22:56:39.320362] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364780, cid 2, qid 0 00:28:12.498 [2024-09-30 22:56:39.320367] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364900, cid 3, qid 0 00:28:12.498 [2024-09-30 22:56:39.320372] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364a80, cid 4, qid 0 00:28:12.498 [2024-09-30 22:56:39.320639] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.498 [2024-09-30 22:56:39.320646] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.498 [2024-09-30 22:56:39.320649] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.498 [2024-09-30 22:56:39.320653] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1364a80) on tqpair=0x1304620 00:28:12.498 [2024-09-30 22:56:39.320659] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:12.498 [2024-09-30 22:56:39.320665] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:12.499 [2024-09-30 22:56:39.320677] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.320680] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1304620) 00:28:12.499 [2024-09-30 22:56:39.320687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.499 [2024-09-30 22:56:39.320697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364a80, cid 4, qid 0 00:28:12.499 [2024-09-30 22:56:39.320932] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.499 [2024-09-30 22:56:39.320939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.499 [2024-09-30 22:56:39.320943] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.320947] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1304620): datao=0, datal=4096, cccid=4 00:28:12.499 [2024-09-30 22:56:39.320951] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1364a80) on tqpair(0x1304620): expected_datao=0, payload_size=4096 00:28:12.499 [2024-09-30 22:56:39.320956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.320983] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.320987] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321151] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.499 [2024-09-30 22:56:39.321160] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.499 [2024-09-30 22:56:39.321163] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321167] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364a80) on tqpair=0x1304620 00:28:12.499 [2024-09-30 22:56:39.321182] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:12.499 [2024-09-30 22:56:39.321217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321221] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1304620) 00:28:12.499 [2024-09-30 22:56:39.321228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.499 [2024-09-30 22:56:39.321235] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321239] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321242] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1304620) 00:28:12.499 [2024-09-30 22:56:39.321249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.499 [2024-09-30 
22:56:39.321262] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364a80, cid 4, qid 0 00:28:12.499 [2024-09-30 22:56:39.321267] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364c00, cid 5, qid 0 00:28:12.499 [2024-09-30 22:56:39.321500] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.499 [2024-09-30 22:56:39.321507] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.499 [2024-09-30 22:56:39.321510] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321514] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1304620): datao=0, datal=1024, cccid=4 00:28:12.499 [2024-09-30 22:56:39.321519] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1364a80) on tqpair(0x1304620): expected_datao=0, payload_size=1024 00:28:12.499 [2024-09-30 22:56:39.321523] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321530] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321533] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321539] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.499 [2024-09-30 22:56:39.321545] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.499 [2024-09-30 22:56:39.321548] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.321552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364c00) on tqpair=0x1304620 00:28:12.499 [2024-09-30 22:56:39.365902] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.499 [2024-09-30 22:56:39.365912] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.499 [2024-09-30 22:56:39.365916] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.365920] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364a80) on tqpair=0x1304620 00:28:12.499 [2024-09-30 22:56:39.365937] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.365942] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1304620) 00:28:12.499 [2024-09-30 22:56:39.365948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.499 [2024-09-30 22:56:39.365965] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364a80, cid 4, qid 0 00:28:12.499 [2024-09-30 22:56:39.366157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.499 [2024-09-30 22:56:39.366164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.499 [2024-09-30 22:56:39.366167] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.366175] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1304620): datao=0, datal=3072, cccid=4 00:28:12.499 [2024-09-30 22:56:39.366179] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1364a80) on tqpair(0x1304620): expected_datao=0, payload_size=3072 00:28:12.499 [2024-09-30 22:56:39.366184] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.366200] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.366204] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.409906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.499 [2024-09-30 22:56:39.409918] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.499 [2024-09-30 22:56:39.409922] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.409926] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364a80) on tqpair=0x1304620 00:28:12.499 [2024-09-30 22:56:39.409935] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.409940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1304620) 00:28:12.499 [2024-09-30 22:56:39.409946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.499 [2024-09-30 22:56:39.409962] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364a80, cid 4, qid 0 00:28:12.499 [2024-09-30 22:56:39.410216] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.499 [2024-09-30 22:56:39.410222] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.499 [2024-09-30 22:56:39.410226] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.410229] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1304620): datao=0, datal=8, cccid=4 00:28:12.499 [2024-09-30 22:56:39.410234] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1364a80) on tqpair(0x1304620): expected_datao=0, payload_size=8 00:28:12.499 [2024-09-30 22:56:39.410238] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.410245] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.410248] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.452096] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.499 [2024-09-30 22:56:39.452106] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.499 [2024-09-30 22:56:39.452109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.499 [2024-09-30 22:56:39.452113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364a80) on tqpair=0x1304620 00:28:12.499 ===================================================== 00:28:12.499 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:12.499 ===================================================== 00:28:12.499 Controller Capabilities/Features 00:28:12.499 ================================ 00:28:12.499 Vendor ID: 0000 00:28:12.499 Subsystem Vendor ID: 0000 00:28:12.499 Serial Number: .................... 00:28:12.499 Model Number: ........................................ 
00:28:12.499 Firmware Version: 25.01 00:28:12.499 Recommended Arb Burst: 0 00:28:12.499 IEEE OUI Identifier: 00 00 00 00:28:12.499 Multi-path I/O 00:28:12.499 May have multiple subsystem ports: No 00:28:12.499 May have multiple controllers: No 00:28:12.499 Associated with SR-IOV VF: No 00:28:12.499 Max Data Transfer Size: 131072 00:28:12.499 Max Number of Namespaces: 0 00:28:12.499 Max Number of I/O Queues: 1024 00:28:12.499 NVMe Specification Version (VS): 1.3 00:28:12.499 NVMe Specification Version (Identify): 1.3 00:28:12.499 Maximum Queue Entries: 128 00:28:12.499 Contiguous Queues Required: Yes 00:28:12.499 Arbitration Mechanisms Supported 00:28:12.499 Weighted Round Robin: Not Supported 00:28:12.499 Vendor Specific: Not Supported 00:28:12.499 Reset Timeout: 15000 ms 00:28:12.499 Doorbell Stride: 4 bytes 00:28:12.499 NVM Subsystem Reset: Not Supported 00:28:12.499 Command Sets Supported 00:28:12.499 NVM Command Set: Supported 00:28:12.499 Boot Partition: Not Supported 00:28:12.499 Memory Page Size Minimum: 4096 bytes 00:28:12.499 Memory Page Size Maximum: 4096 bytes 00:28:12.499 Persistent Memory Region: Not Supported 00:28:12.499 Optional Asynchronous Events Supported 00:28:12.499 Namespace Attribute Notices: Not Supported 00:28:12.499 Firmware Activation Notices: Not Supported 00:28:12.499 ANA Change Notices: Not Supported 00:28:12.499 PLE Aggregate Log Change Notices: Not Supported 00:28:12.499 LBA Status Info Alert Notices: Not Supported 00:28:12.499 EGE Aggregate Log Change Notices: Not Supported 00:28:12.499 Normal NVM Subsystem Shutdown event: Not Supported 00:28:12.499 Zone Descriptor Change Notices: Not Supported 00:28:12.499 Discovery Log Change Notices: Supported 00:28:12.499 Controller Attributes 00:28:12.499 128-bit Host Identifier: Not Supported 00:28:12.499 Non-Operational Permissive Mode: Not Supported 00:28:12.499 NVM Sets: Not Supported 00:28:12.499 Read Recovery Levels: Not Supported 00:28:12.499 Endurance Groups: Not Supported 00:28:12.499 Predictable Latency Mode: Not Supported 00:28:12.500 Traffic Based Keep ALive: Not Supported 00:28:12.500 Namespace Granularity: Not Supported 00:28:12.500 SQ Associations: Not Supported 00:28:12.500 UUID List: Not Supported 00:28:12.500 Multi-Domain Subsystem: Not Supported 00:28:12.500 Fixed Capacity Management: Not Supported 00:28:12.500 Variable Capacity Management: Not Supported 00:28:12.500 Delete Endurance Group: Not Supported 00:28:12.500 Delete NVM Set: Not Supported 00:28:12.500 Extended LBA Formats Supported: Not Supported 00:28:12.500 Flexible Data Placement Supported: Not Supported 00:28:12.500 00:28:12.500 Controller Memory Buffer Support 00:28:12.500 ================================ 00:28:12.500 Supported: No 00:28:12.500 00:28:12.500 Persistent Memory Region Support 00:28:12.500 ================================ 00:28:12.500 Supported: No 00:28:12.500 00:28:12.500 Admin Command Set Attributes 00:28:12.500 ============================ 00:28:12.500 Security Send/Receive: Not Supported 00:28:12.500 Format NVM: Not Supported 00:28:12.500 Firmware Activate/Download: Not Supported 00:28:12.500 Namespace Management: Not Supported 00:28:12.500 Device Self-Test: Not Supported 00:28:12.500 Directives: Not Supported 00:28:12.500 NVMe-MI: Not Supported 00:28:12.500 Virtualization Management: Not Supported 00:28:12.500 Doorbell Buffer Config: Not Supported 00:28:12.500 Get LBA Status Capability: Not Supported 00:28:12.500 Command & Feature Lockdown Capability: Not Supported 00:28:12.500 Abort Command Limit: 1 00:28:12.500 Async 
Event Request Limit: 4 00:28:12.500 Number of Firmware Slots: N/A 00:28:12.500 Firmware Slot 1 Read-Only: N/A 00:28:12.500 Firmware Activation Without Reset: N/A 00:28:12.500 Multiple Update Detection Support: N/A 00:28:12.500 Firmware Update Granularity: No Information Provided 00:28:12.500 Per-Namespace SMART Log: No 00:28:12.500 Asymmetric Namespace Access Log Page: Not Supported 00:28:12.500 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:12.500 Command Effects Log Page: Not Supported 00:28:12.500 Get Log Page Extended Data: Supported 00:28:12.500 Telemetry Log Pages: Not Supported 00:28:12.500 Persistent Event Log Pages: Not Supported 00:28:12.500 Supported Log Pages Log Page: May Support 00:28:12.500 Commands Supported & Effects Log Page: Not Supported 00:28:12.500 Feature Identifiers & Effects Log Page:May Support 00:28:12.500 NVMe-MI Commands & Effects Log Page: May Support 00:28:12.500 Data Area 4 for Telemetry Log: Not Supported 00:28:12.500 Error Log Page Entries Supported: 128 00:28:12.500 Keep Alive: Not Supported 00:28:12.500 00:28:12.500 NVM Command Set Attributes 00:28:12.500 ========================== 00:28:12.500 Submission Queue Entry Size 00:28:12.500 Max: 1 00:28:12.500 Min: 1 00:28:12.500 Completion Queue Entry Size 00:28:12.500 Max: 1 00:28:12.500 Min: 1 00:28:12.500 Number of Namespaces: 0 00:28:12.500 Compare Command: Not Supported 00:28:12.500 Write Uncorrectable Command: Not Supported 00:28:12.500 Dataset Management Command: Not Supported 00:28:12.500 Write Zeroes Command: Not Supported 00:28:12.500 Set Features Save Field: Not Supported 00:28:12.500 Reservations: Not Supported 00:28:12.500 Timestamp: Not Supported 00:28:12.500 Copy: Not Supported 00:28:12.500 Volatile Write Cache: Not Present 00:28:12.500 Atomic Write Unit (Normal): 1 00:28:12.500 Atomic Write Unit (PFail): 1 00:28:12.500 Atomic Compare & Write Unit: 1 00:28:12.500 Fused Compare & Write: Supported 00:28:12.500 Scatter-Gather List 00:28:12.500 SGL Command Set: Supported 00:28:12.500 SGL Keyed: Supported 00:28:12.500 SGL Bit Bucket Descriptor: Not Supported 00:28:12.500 SGL Metadata Pointer: Not Supported 00:28:12.500 Oversized SGL: Not Supported 00:28:12.500 SGL Metadata Address: Not Supported 00:28:12.500 SGL Offset: Supported 00:28:12.500 Transport SGL Data Block: Not Supported 00:28:12.500 Replay Protected Memory Block: Not Supported 00:28:12.500 00:28:12.500 Firmware Slot Information 00:28:12.500 ========================= 00:28:12.500 Active slot: 0 00:28:12.500 00:28:12.500 00:28:12.500 Error Log 00:28:12.500 ========= 00:28:12.500 00:28:12.500 Active Namespaces 00:28:12.500 ================= 00:28:12.500 Discovery Log Page 00:28:12.500 ================== 00:28:12.500 Generation Counter: 2 00:28:12.500 Number of Records: 2 00:28:12.500 Record Format: 0 00:28:12.500 00:28:12.500 Discovery Log Entry 0 00:28:12.500 ---------------------- 00:28:12.500 Transport Type: 3 (TCP) 00:28:12.500 Address Family: 1 (IPv4) 00:28:12.500 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:12.500 Entry Flags: 00:28:12.500 Duplicate Returned Information: 1 00:28:12.500 Explicit Persistent Connection Support for Discovery: 1 00:28:12.500 Transport Requirements: 00:28:12.500 Secure Channel: Not Required 00:28:12.500 Port ID: 0 (0x0000) 00:28:12.500 Controller ID: 65535 (0xffff) 00:28:12.500 Admin Max SQ Size: 128 00:28:12.500 Transport Service Identifier: 4420 00:28:12.500 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:12.500 Transport Address: 10.0.0.2 00:28:12.500 
Discovery Log Entry 1 00:28:12.500 ---------------------- 00:28:12.500 Transport Type: 3 (TCP) 00:28:12.500 Address Family: 1 (IPv4) 00:28:12.500 Subsystem Type: 2 (NVM Subsystem) 00:28:12.500 Entry Flags: 00:28:12.500 Duplicate Returned Information: 0 00:28:12.500 Explicit Persistent Connection Support for Discovery: 0 00:28:12.500 Transport Requirements: 00:28:12.500 Secure Channel: Not Required 00:28:12.500 Port ID: 0 (0x0000) 00:28:12.500 Controller ID: 65535 (0xffff) 00:28:12.500 Admin Max SQ Size: 128 00:28:12.500 Transport Service Identifier: 4420 00:28:12.500 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:12.500 Transport Address: 10.0.0.2 [2024-09-30 22:56:39.452214] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:12.500 [2024-09-30 22:56:39.452227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364480) on tqpair=0x1304620 00:28:12.500 [2024-09-30 22:56:39.452234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.500 [2024-09-30 22:56:39.452240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364600) on tqpair=0x1304620 00:28:12.500 [2024-09-30 22:56:39.452245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.500 [2024-09-30 22:56:39.452251] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364780) on tqpair=0x1304620 00:28:12.500 [2024-09-30 22:56:39.452256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.500 [2024-09-30 22:56:39.452261] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364900) on tqpair=0x1304620 00:28:12.500 [2024-09-30 22:56:39.452265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.500 [2024-09-30 22:56:39.452277] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.500 [2024-09-30 22:56:39.452281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.500 [2024-09-30 22:56:39.452284] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1304620) 00:28:12.500 [2024-09-30 22:56:39.452292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.500 [2024-09-30 22:56:39.452307] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364900, cid 3, qid 0 00:28:12.500 [2024-09-30 22:56:39.452427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.500 [2024-09-30 22:56:39.452434] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.500 [2024-09-30 22:56:39.452437] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.500 [2024-09-30 22:56:39.452441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364900) on tqpair=0x1304620 00:28:12.500 [2024-09-30 22:56:39.452449] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.500 [2024-09-30 22:56:39.452452] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.500 [2024-09-30 22:56:39.452456] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1304620) 00:28:12.500 [2024-09-30 
22:56:39.452463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.500 [2024-09-30 22:56:39.452477] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364900, cid 3, qid 0 00:28:12.500 [2024-09-30 22:56:39.452726] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.500 [2024-09-30 22:56:39.452733] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.500 [2024-09-30 22:56:39.452736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.500 [2024-09-30 22:56:39.452740] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364900) on tqpair=0x1304620 00:28:12.500 [2024-09-30 22:56:39.452746] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:12.500 [2024-09-30 22:56:39.452754] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:12.500 [2024-09-30 22:56:39.452764] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.500 [2024-09-30 22:56:39.452768] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.452771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1304620) 00:28:12.501 [2024-09-30 22:56:39.452778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.501 [2024-09-30 22:56:39.452789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364900, cid 3, qid 0 00:28:12.501 [2024-09-30 22:56:39.452979] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.501 [2024-09-30 22:56:39.452986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.501 [2024-09-30 22:56:39.452989] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.452993] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364900) on tqpair=0x1304620 00:28:12.501 [2024-09-30 22:56:39.453004] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453008] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453011] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1304620) 00:28:12.501 [2024-09-30 22:56:39.453018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.501 [2024-09-30 22:56:39.453029] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364900, cid 3, qid 0 00:28:12.501 [2024-09-30 22:56:39.453243] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.501 [2024-09-30 22:56:39.453249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.501 [2024-09-30 22:56:39.453255] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453259] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364900) on tqpair=0x1304620 00:28:12.501 [2024-09-30 22:56:39.453269] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453277] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1304620) 00:28:12.501 [2024-09-30 22:56:39.453284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.501 [2024-09-30 22:56:39.453294] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364900, cid 3, qid 0 00:28:12.501 [2024-09-30 22:56:39.453532] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.501 [2024-09-30 22:56:39.453538] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.501 [2024-09-30 22:56:39.453541] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453545] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364900) on tqpair=0x1304620 00:28:12.501 [2024-09-30 22:56:39.453555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453559] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453562] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1304620) 00:28:12.501 [2024-09-30 22:56:39.453569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.501 [2024-09-30 22:56:39.453579] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364900, cid 3, qid 0 00:28:12.501 [2024-09-30 22:56:39.453835] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.501 [2024-09-30 22:56:39.453842] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.501 [2024-09-30 22:56:39.453845] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453849] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364900) on tqpair=0x1304620 00:28:12.501 [2024-09-30 22:56:39.453859] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453863] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.453867] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1304620) 00:28:12.501 [2024-09-30 22:56:39.453873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.501 [2024-09-30 22:56:39.453883] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1364900, cid 3, qid 0 00:28:12.501 [2024-09-30 22:56:39.457901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.501 [2024-09-30 22:56:39.457909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.501 [2024-09-30 22:56:39.457913] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.501 [2024-09-30 22:56:39.457917] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1364900) on tqpair=0x1304620 00:28:12.501 [2024-09-30 22:56:39.457925] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:12.501 00:28:12.501 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
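The spdk_nvme_identify invocation above is what produces the trace that follows: with -L all enabled, every step of the host-side controller state machine is logged (icreq exchange, FABRIC CONNECT, PROPERTY GET/SET for VS/CAP/CC/CSTS, IDENTIFY, AER and keep-alive setup, and finally the shutdown sequence). As a rough illustration only — not part of this test run — the same flow can be driven through SPDK's public API; in the minimal C sketch below the program name, the reduced error handling, and the printed fields are illustrative assumptions, and it must be built against an SPDK tree with the usual hugepage environment configured.

/*
 * identify_sketch.c -- illustrative sketch, not part of this test run.
 * Connects to the same NVMe-oF TCP subsystem the test targets and prints
 * two identify-controller fields. Assumes SPDK headers/libs to build
 * against and a configured hugepage environment at runtime.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";  /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string the test passes via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /*
     * Runs the connect state machine recorded in the trace below:
     * icreq, FABRIC CONNECT, PROPERTY GET/SET, IDENTIFY, AER and
     * keep-alive configuration.
     */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Serial Number: %.20s\n", (const char *)cdata->sn);
    printf("Model Number:  %.40s\n", (const char *)cdata->mn);

    /* Triggers the CC-based shutdown sequence also seen in the trace. */
    spdk_nvme_detach(ctrlr);
    return 0;
}

Run against this target, such a program would be expected to report the same Serial Number (SPDK00000000000001) and Model Number (SPDK bdev Controller) that appear in the identify output further down.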
00:28:12.501 [2024-09-30 22:56:39.505593] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:28:12.501 [2024-09-30 22:56:39.505638] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802745 ] 00:28:12.767 [2024-09-30 22:56:39.538892] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:12.767 [2024-09-30 22:56:39.542956] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:12.767 [2024-09-30 22:56:39.542962] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:12.767 [2024-09-30 22:56:39.542977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:12.767 [2024-09-30 22:56:39.542988] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:12.767 [2024-09-30 22:56:39.543706] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:12.767 [2024-09-30 22:56:39.543744] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc95620 0 00:28:12.767 [2024-09-30 22:56:39.549913] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:12.767 [2024-09-30 22:56:39.549930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:12.767 [2024-09-30 22:56:39.549935] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:12.767 [2024-09-30 22:56:39.549939] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:12.767 [2024-09-30 22:56:39.549973] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.549979] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.549983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.767 [2024-09-30 22:56:39.549998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:12.767 [2024-09-30 22:56:39.550023] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.767 [2024-09-30 22:56:39.557906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.767 [2024-09-30 22:56:39.557915] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.767 [2024-09-30 22:56:39.557919] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.557924] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.767 [2024-09-30 22:56:39.557937] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:12.767 [2024-09-30 22:56:39.557946] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:12.767 [2024-09-30 22:56:39.557951] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:12.767 [2024-09-30 22:56:39.557966] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.557970] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.557973] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.767 [2024-09-30 22:56:39.557982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.767 [2024-09-30 22:56:39.557997] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.767 [2024-09-30 22:56:39.558235] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.767 [2024-09-30 22:56:39.558241] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.767 [2024-09-30 22:56:39.558245] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558249] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.767 [2024-09-30 22:56:39.558254] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:12.767 [2024-09-30 22:56:39.558262] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:12.767 [2024-09-30 22:56:39.558274] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558278] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558282] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.767 [2024-09-30 22:56:39.558289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.767 [2024-09-30 22:56:39.558300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.767 [2024-09-30 22:56:39.558512] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.767 [2024-09-30 22:56:39.558519] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.767 [2024-09-30 22:56:39.558522] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558526] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.767 [2024-09-30 22:56:39.558532] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:12.767 [2024-09-30 22:56:39.558541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:12.767 [2024-09-30 22:56:39.558547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558551] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558555] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.767 [2024-09-30 22:56:39.558562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.767 [2024-09-30 22:56:39.558572] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.767 [2024-09-30 22:56:39.558754] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.767 [2024-09-30 22:56:39.558761] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.767 [2024-09-30 22:56:39.558764] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558768] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.767 [2024-09-30 22:56:39.558773] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:12.767 [2024-09-30 22:56:39.558783] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558787] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.558791] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.767 [2024-09-30 22:56:39.558797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.767 [2024-09-30 22:56:39.558808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.767 [2024-09-30 22:56:39.559015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.767 [2024-09-30 22:56:39.559022] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.767 [2024-09-30 22:56:39.559025] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.559029] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.767 [2024-09-30 22:56:39.559034] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:12.767 [2024-09-30 22:56:39.559039] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:12.767 [2024-09-30 22:56:39.559047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:12.767 [2024-09-30 22:56:39.559152] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:12.767 [2024-09-30 22:56:39.559159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:12.767 [2024-09-30 22:56:39.559167] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.559171] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.559175] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.767 [2024-09-30 22:56:39.559182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.767 [2024-09-30 22:56:39.559193] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.767 [2024-09-30 22:56:39.559380] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.767 [2024-09-30 22:56:39.559387] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.767 [2024-09-30 22:56:39.559390] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.559394] nvme_tcp.c:1079:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.767 [2024-09-30 22:56:39.559399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:12.767 [2024-09-30 22:56:39.559408] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.767 [2024-09-30 22:56:39.559412] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.559415] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.559422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.768 [2024-09-30 22:56:39.559432] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.768 [2024-09-30 22:56:39.559630] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.768 [2024-09-30 22:56:39.559637] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.768 [2024-09-30 22:56:39.559640] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.559644] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.768 [2024-09-30 22:56:39.559649] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:12.768 [2024-09-30 22:56:39.559653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.559661] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:12.768 [2024-09-30 22:56:39.559669] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.559679] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.559682] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.559689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.768 [2024-09-30 22:56:39.559700] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.768 [2024-09-30 22:56:39.559967] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.768 [2024-09-30 22:56:39.559974] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.768 [2024-09-30 22:56:39.559978] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.559982] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc95620): datao=0, datal=4096, cccid=0 00:28:12.768 [2024-09-30 22:56:39.559988] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5480) on tqpair(0xc95620): expected_datao=0, payload_size=4096 00:28:12.768 [2024-09-30 22:56:39.559994] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.560003] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.560007] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601084] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.768 [2024-09-30 22:56:39.601094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.768 [2024-09-30 22:56:39.601097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601102] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.768 [2024-09-30 22:56:39.601111] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:12.768 [2024-09-30 22:56:39.601117] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:12.768 [2024-09-30 22:56:39.601121] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:12.768 [2024-09-30 22:56:39.601125] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:12.768 [2024-09-30 22:56:39.601130] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:12.768 [2024-09-30 22:56:39.601135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.601143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.601150] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601155] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601158] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.601166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:12.768 [2024-09-30 22:56:39.601179] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.768 [2024-09-30 22:56:39.601393] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.768 [2024-09-30 22:56:39.601399] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.768 [2024-09-30 22:56:39.601402] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601406] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.768 [2024-09-30 22:56:39.601414] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601421] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.601428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.768 [2024-09-30 22:56:39.601434] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601438] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601441] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.601447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.768 [2024-09-30 22:56:39.601454] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601458] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601461] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.601467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.768 [2024-09-30 22:56:39.601477] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601481] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601484] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.601490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.768 [2024-09-30 22:56:39.601495] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.601508] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.601514] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.601525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.768 [2024-09-30 22:56:39.601537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5480, cid 0, qid 0 00:28:12.768 [2024-09-30 22:56:39.601542] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5600, cid 1, qid 0 00:28:12.768 [2024-09-30 22:56:39.601547] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5780, cid 2, qid 0 00:28:12.768 [2024-09-30 22:56:39.601552] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.768 [2024-09-30 22:56:39.601557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5a80, cid 4, qid 0 00:28:12.768 [2024-09-30 22:56:39.601794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.768 [2024-09-30 22:56:39.601801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.768 [2024-09-30 22:56:39.601804] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601808] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5a80) on tqpair=0xc95620 00:28:12.768 [2024-09-30 22:56:39.601813] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:12.768 [2024-09-30 22:56:39.601818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
controller iocs specific (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.601827] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.601837] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.601843] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.601851] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc95620) 00:28:12.768 [2024-09-30 22:56:39.601858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:12.768 [2024-09-30 22:56:39.601868] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5a80, cid 4, qid 0 00:28:12.768 [2024-09-30 22:56:39.605901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.768 [2024-09-30 22:56:39.605909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.768 [2024-09-30 22:56:39.605913] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.605917] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5a80) on tqpair=0xc95620 00:28:12.768 [2024-09-30 22:56:39.605987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.606000] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:12.768 [2024-09-30 22:56:39.606008] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.768 [2024-09-30 22:56:39.606012] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc95620) 00:28:12.769 [2024-09-30 22:56:39.606018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.769 [2024-09-30 22:56:39.606031] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5a80, cid 4, qid 0 00:28:12.769 [2024-09-30 22:56:39.606261] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.769 [2024-09-30 22:56:39.606268] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.769 [2024-09-30 22:56:39.606272] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.606276] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc95620): datao=0, datal=4096, cccid=4 00:28:12.769 [2024-09-30 22:56:39.606280] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5a80) on tqpair(0xc95620): expected_datao=0, payload_size=4096 00:28:12.769 [2024-09-30 22:56:39.606285] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.606299] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.606303] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647057] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:28:12.769 [2024-09-30 22:56:39.647066] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.769 [2024-09-30 22:56:39.647070] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647074] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5a80) on tqpair=0xc95620 00:28:12.769 [2024-09-30 22:56:39.647087] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:12.769 [2024-09-30 22:56:39.647111] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.647123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.647130] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc95620) 00:28:12.769 [2024-09-30 22:56:39.647141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.769 [2024-09-30 22:56:39.647154] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5a80, cid 4, qid 0 00:28:12.769 [2024-09-30 22:56:39.647386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.769 [2024-09-30 22:56:39.647393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.769 [2024-09-30 22:56:39.647396] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647400] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc95620): datao=0, datal=4096, cccid=4 00:28:12.769 [2024-09-30 22:56:39.647405] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5a80) on tqpair(0xc95620): expected_datao=0, payload_size=4096 00:28:12.769 [2024-09-30 22:56:39.647409] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647416] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647420] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647561] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.769 [2024-09-30 22:56:39.647567] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.769 [2024-09-30 22:56:39.647570] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5a80) on tqpair=0xc95620 00:28:12.769 [2024-09-30 22:56:39.647591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.647602] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.647609] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647613] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc95620) 00:28:12.769 [2024-09-30 22:56:39.647620] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.769 [2024-09-30 22:56:39.647631] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5a80, cid 4, qid 0 00:28:12.769 [2024-09-30 22:56:39.647852] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.769 [2024-09-30 22:56:39.647858] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.769 [2024-09-30 22:56:39.647862] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647865] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc95620): datao=0, datal=4096, cccid=4 00:28:12.769 [2024-09-30 22:56:39.647870] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5a80) on tqpair(0xc95620): expected_datao=0, payload_size=4096 00:28:12.769 [2024-09-30 22:56:39.647874] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647890] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.647900] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.648080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.769 [2024-09-30 22:56:39.648086] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.769 [2024-09-30 22:56:39.648090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.648094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5a80) on tqpair=0xc95620 00:28:12.769 [2024-09-30 22:56:39.648102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.648110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.648119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.648126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.648132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.648137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.648143] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:12.769 [2024-09-30 22:56:39.648148] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:12.769 [2024-09-30 22:56:39.648154] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:12.769 [2024-09-30 22:56:39.648171] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.648175] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc95620) 00:28:12.769 
[2024-09-30 22:56:39.648182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.769 [2024-09-30 22:56:39.648192] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.648196] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.648199] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc95620) 00:28:12.769 [2024-09-30 22:56:39.648206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.769 [2024-09-30 22:56:39.648218] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5a80, cid 4, qid 0 00:28:12.769 [2024-09-30 22:56:39.648224] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5c00, cid 5, qid 0 00:28:12.769 [2024-09-30 22:56:39.648319] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.769 [2024-09-30 22:56:39.648326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.769 [2024-09-30 22:56:39.648329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.648333] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5a80) on tqpair=0xc95620 00:28:12.769 [2024-09-30 22:56:39.648340] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.769 [2024-09-30 22:56:39.648346] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.769 [2024-09-30 22:56:39.648349] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.648353] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5c00) on tqpair=0xc95620 00:28:12.769 [2024-09-30 22:56:39.648362] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.769 [2024-09-30 22:56:39.648366] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc95620) 00:28:12.769 [2024-09-30 22:56:39.648373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.769 [2024-09-30 22:56:39.648383] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5c00, cid 5, qid 0 00:28:12.769 [2024-09-30 22:56:39.648458] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.769 [2024-09-30 22:56:39.648464] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.769 [2024-09-30 22:56:39.648468] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5c00) on tqpair=0xc95620 00:28:12.770 [2024-09-30 22:56:39.648482] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc95620) 00:28:12.770 [2024-09-30 22:56:39.648492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.770 [2024-09-30 22:56:39.648503] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5c00, cid 5, qid 0 00:28:12.770 [2024-09-30 22:56:39.648581] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:28:12.770 [2024-09-30 22:56:39.648588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.770 [2024-09-30 22:56:39.648591] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648595] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5c00) on tqpair=0xc95620 00:28:12.770 [2024-09-30 22:56:39.648604] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648608] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc95620) 00:28:12.770 [2024-09-30 22:56:39.648615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.770 [2024-09-30 22:56:39.648625] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5c00, cid 5, qid 0 00:28:12.770 [2024-09-30 22:56:39.648706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.770 [2024-09-30 22:56:39.648715] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.770 [2024-09-30 22:56:39.648719] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648723] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5c00) on tqpair=0xc95620 00:28:12.770 [2024-09-30 22:56:39.648738] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648742] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc95620) 00:28:12.770 [2024-09-30 22:56:39.648749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.770 [2024-09-30 22:56:39.648756] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648760] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc95620) 00:28:12.770 [2024-09-30 22:56:39.648767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.770 [2024-09-30 22:56:39.648774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc95620) 00:28:12.770 [2024-09-30 22:56:39.648784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.770 [2024-09-30 22:56:39.648794] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.648798] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc95620) 00:28:12.770 [2024-09-30 22:56:39.648804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.770 [2024-09-30 22:56:39.648816] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5c00, cid 5, qid 0 00:28:12.770 [2024-09-30 22:56:39.648821] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5a80, cid 4, qid 0 00:28:12.770 [2024-09-30 22:56:39.648826] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5d80, cid 6, qid 0 00:28:12.770 [2024-09-30 22:56:39.648831] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5f00, cid 7, qid 0 00:28:12.770 [2024-09-30 22:56:39.649107] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.770 [2024-09-30 22:56:39.649114] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.770 [2024-09-30 22:56:39.649118] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649122] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc95620): datao=0, datal=8192, cccid=5 00:28:12.770 [2024-09-30 22:56:39.649127] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5c00) on tqpair(0xc95620): expected_datao=0, payload_size=8192 00:28:12.770 [2024-09-30 22:56:39.649131] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649212] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649216] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.770 [2024-09-30 22:56:39.649228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.770 [2024-09-30 22:56:39.649231] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649235] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc95620): datao=0, datal=512, cccid=4 00:28:12.770 [2024-09-30 22:56:39.649240] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5a80) on tqpair(0xc95620): expected_datao=0, payload_size=512 00:28:12.770 [2024-09-30 22:56:39.649244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649251] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649256] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649262] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.770 [2024-09-30 22:56:39.649268] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.770 [2024-09-30 22:56:39.649272] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649275] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc95620): datao=0, datal=512, cccid=6 00:28:12.770 [2024-09-30 22:56:39.649280] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5d80) on tqpair(0xc95620): expected_datao=0, payload_size=512 00:28:12.770 [2024-09-30 22:56:39.649284] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649291] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649294] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649300] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:12.770 [2024-09-30 22:56:39.649306] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:12.770 [2024-09-30 22:56:39.649309] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649313] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xc95620): datao=0, datal=4096, cccid=7 00:28:12.770 [2024-09-30 22:56:39.649317] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5f00) on tqpair(0xc95620): expected_datao=0, payload_size=4096 00:28:12.770 [2024-09-30 22:56:39.649322] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649329] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649332] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649346] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.770 [2024-09-30 22:56:39.649353] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.770 [2024-09-30 22:56:39.649356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649360] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5c00) on tqpair=0xc95620 00:28:12.770 [2024-09-30 22:56:39.649373] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.770 [2024-09-30 22:56:39.649379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.770 [2024-09-30 22:56:39.649382] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649386] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5a80) on tqpair=0xc95620 00:28:12.770 [2024-09-30 22:56:39.649397] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.770 [2024-09-30 22:56:39.649403] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.770 [2024-09-30 22:56:39.649406] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5d80) on tqpair=0xc95620 00:28:12.770 [2024-09-30 22:56:39.649417] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.770 [2024-09-30 22:56:39.649423] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.770 [2024-09-30 22:56:39.649426] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.770 [2024-09-30 22:56:39.649430] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5f00) on tqpair=0xc95620 00:28:12.770 ===================================================== 00:28:12.770 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:12.770 ===================================================== 00:28:12.770 Controller Capabilities/Features 00:28:12.770 ================================ 00:28:12.770 Vendor ID: 8086 00:28:12.770 Subsystem Vendor ID: 8086 00:28:12.770 Serial Number: SPDK00000000000001 00:28:12.770 Model Number: SPDK bdev Controller 00:28:12.771 Firmware Version: 25.01 00:28:12.771 Recommended Arb Burst: 6 00:28:12.771 IEEE OUI Identifier: e4 d2 5c 00:28:12.771 Multi-path I/O 00:28:12.771 May have multiple subsystem ports: Yes 00:28:12.771 May have multiple controllers: Yes 00:28:12.771 Associated with SR-IOV VF: No 00:28:12.771 Max Data Transfer Size: 131072 00:28:12.771 Max Number of Namespaces: 32 00:28:12.771 Max Number of I/O Queues: 127 00:28:12.771 NVMe Specification Version (VS): 1.3 00:28:12.771 NVMe Specification Version (Identify): 1.3 00:28:12.771 Maximum Queue Entries: 128 00:28:12.771 Contiguous Queues Required: Yes 00:28:12.771 Arbitration Mechanisms Supported 00:28:12.771 Weighted Round Robin: Not Supported 
00:28:12.771 Vendor Specific: Not Supported 00:28:12.771 Reset Timeout: 15000 ms 00:28:12.771 Doorbell Stride: 4 bytes 00:28:12.771 NVM Subsystem Reset: Not Supported 00:28:12.771 Command Sets Supported 00:28:12.771 NVM Command Set: Supported 00:28:12.771 Boot Partition: Not Supported 00:28:12.771 Memory Page Size Minimum: 4096 bytes 00:28:12.771 Memory Page Size Maximum: 4096 bytes 00:28:12.771 Persistent Memory Region: Not Supported 00:28:12.771 Optional Asynchronous Events Supported 00:28:12.771 Namespace Attribute Notices: Supported 00:28:12.771 Firmware Activation Notices: Not Supported 00:28:12.771 ANA Change Notices: Not Supported 00:28:12.771 PLE Aggregate Log Change Notices: Not Supported 00:28:12.771 LBA Status Info Alert Notices: Not Supported 00:28:12.771 EGE Aggregate Log Change Notices: Not Supported 00:28:12.771 Normal NVM Subsystem Shutdown event: Not Supported 00:28:12.771 Zone Descriptor Change Notices: Not Supported 00:28:12.771 Discovery Log Change Notices: Not Supported 00:28:12.771 Controller Attributes 00:28:12.771 128-bit Host Identifier: Supported 00:28:12.771 Non-Operational Permissive Mode: Not Supported 00:28:12.771 NVM Sets: Not Supported 00:28:12.771 Read Recovery Levels: Not Supported 00:28:12.771 Endurance Groups: Not Supported 00:28:12.771 Predictable Latency Mode: Not Supported 00:28:12.771 Traffic Based Keep Alive: Not Supported 00:28:12.771 Namespace Granularity: Not Supported 00:28:12.771 SQ Associations: Not Supported 00:28:12.771 UUID List: Not Supported 00:28:12.771 Multi-Domain Subsystem: Not Supported 00:28:12.771 Fixed Capacity Management: Not Supported 00:28:12.771 Variable Capacity Management: Not Supported 00:28:12.771 Delete Endurance Group: Not Supported 00:28:12.771 Delete NVM Set: Not Supported 00:28:12.771 Extended LBA Formats Supported: Not Supported 00:28:12.771 Flexible Data Placement Supported: Not Supported 00:28:12.771 00:28:12.771 Controller Memory Buffer Support 00:28:12.771 ================================ 00:28:12.771 Supported: No 00:28:12.771 00:28:12.771 Persistent Memory Region Support 00:28:12.771 ================================ 00:28:12.771 Supported: No 00:28:12.771 00:28:12.771 Admin Command Set Attributes 00:28:12.771 ============================ 00:28:12.771 Security Send/Receive: Not Supported 00:28:12.771 Format NVM: Not Supported 00:28:12.771 Firmware Activate/Download: Not Supported 00:28:12.771 Namespace Management: Not Supported 00:28:12.771 Device Self-Test: Not Supported 00:28:12.771 Directives: Not Supported 00:28:12.771 NVMe-MI: Not Supported 00:28:12.771 Virtualization Management: Not Supported 00:28:12.771 Doorbell Buffer Config: Not Supported 00:28:12.771 Get LBA Status Capability: Not Supported 00:28:12.771 Command & Feature Lockdown Capability: Not Supported 00:28:12.771 Abort Command Limit: 4 00:28:12.771 Async Event Request Limit: 4 00:28:12.771 Number of Firmware Slots: N/A 00:28:12.771 Firmware Slot 1 Read-Only: N/A 00:28:12.771 Firmware Activation Without Reset: N/A 00:28:12.771 Multiple Update Detection Support: N/A 00:28:12.771 Firmware Update Granularity: No Information Provided 00:28:12.771 Per-Namespace SMART Log: No 00:28:12.771 Asymmetric Namespace Access Log Page: Not Supported 00:28:12.771 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:12.771 Command Effects Log Page: Supported 00:28:12.771 Get Log Page Extended Data: Supported 00:28:12.771 Telemetry Log Pages: Not Supported 00:28:12.771 Persistent Event Log Pages: Not Supported 00:28:12.771 Supported Log Pages Log Page: May Support 
00:28:12.771 Commands Supported & Effects Log Page: Not Supported 00:28:12.771 Feature Identifiers & Effects Log Page: May Support 00:28:12.771 NVMe-MI Commands & Effects Log Page: May Support 00:28:12.771 Data Area 4 for Telemetry Log: Not Supported 00:28:12.771 Error Log Page Entries Supported: 128 00:28:12.771 Keep Alive: Supported 00:28:12.771 Keep Alive Granularity: 10000 ms 00:28:12.771 00:28:12.771 NVM Command Set Attributes 00:28:12.771 ========================== 00:28:12.771 Submission Queue Entry Size 00:28:12.771 Max: 64 00:28:12.771 Min: 64 00:28:12.771 Completion Queue Entry Size 00:28:12.771 Max: 16 00:28:12.771 Min: 16 00:28:12.771 Number of Namespaces: 32 00:28:12.771 Compare Command: Supported 00:28:12.771 Write Uncorrectable Command: Not Supported 00:28:12.771 Dataset Management Command: Supported 00:28:12.771 Write Zeroes Command: Supported 00:28:12.771 Set Features Save Field: Not Supported 00:28:12.771 Reservations: Supported 00:28:12.771 Timestamp: Not Supported 00:28:12.771 Copy: Supported 00:28:12.771 Volatile Write Cache: Present 00:28:12.771 Atomic Write Unit (Normal): 1 00:28:12.771 Atomic Write Unit (PFail): 1 00:28:12.771 Atomic Compare & Write Unit: 1 00:28:12.771 Fused Compare & Write: Supported 00:28:12.771 Scatter-Gather List 00:28:12.771 SGL Command Set: Supported 00:28:12.771 SGL Keyed: Supported 00:28:12.771 SGL Bit Bucket Descriptor: Not Supported 00:28:12.771 SGL Metadata Pointer: Not Supported 00:28:12.771 Oversized SGL: Not Supported 00:28:12.771 SGL Metadata Address: Not Supported 00:28:12.771 SGL Offset: Supported 00:28:12.771 Transport SGL Data Block: Not Supported 00:28:12.771 Replay Protected Memory Block: Not Supported 00:28:12.771 00:28:12.771 Firmware Slot Information 00:28:12.771 ========================= 00:28:12.771 Active slot: 1 00:28:12.771 Slot 1 Firmware Revision: 25.01 00:28:12.771 00:28:12.771 00:28:12.771 Commands Supported and Effects 00:28:12.771 ============================== 00:28:12.771 Admin Commands 00:28:12.771 -------------- 00:28:12.771 Get Log Page (02h): Supported 00:28:12.771 Identify (06h): Supported 00:28:12.771 Abort (08h): Supported 00:28:12.771 Set Features (09h): Supported 00:28:12.771 Get Features (0Ah): Supported 00:28:12.771 Asynchronous Event Request (0Ch): Supported 00:28:12.771 Keep Alive (18h): Supported 00:28:12.771 I/O Commands 00:28:12.771 ------------ 00:28:12.771 Flush (00h): Supported LBA-Change 00:28:12.771 Write (01h): Supported LBA-Change 00:28:12.771 Read (02h): Supported 00:28:12.771 Compare (05h): Supported 00:28:12.771 Write Zeroes (08h): Supported LBA-Change 00:28:12.771 Dataset Management (09h): Supported LBA-Change 00:28:12.771 Copy (19h): Supported LBA-Change 00:28:12.771 00:28:12.771 Error Log 00:28:12.771 ========= 00:28:12.771 00:28:12.771 Arbitration 00:28:12.771 =========== 00:28:12.771 Arbitration Burst: 1 00:28:12.771 00:28:12.771 Power Management 00:28:12.771 ================ 00:28:12.771 Number of Power States: 1 00:28:12.771 Current Power State: Power State #0 00:28:12.771 Power State #0: 00:28:12.771 Max Power: 0.00 W 00:28:12.771 Non-Operational State: Operational 00:28:12.771 Entry Latency: Not Reported 00:28:12.771 Exit Latency: Not Reported 00:28:12.771 Relative Read Throughput: 0 00:28:12.771 Relative Read Latency: 0 00:28:12.771 Relative Write Throughput: 0 00:28:12.771 Relative Write Latency: 0 00:28:12.771 Idle Power: Not Reported 00:28:12.772 Active Power: Not Reported 00:28:12.772 Non-Operational Permissive Mode: Not Supported 00:28:12.772 00:28:12.772 Health 
Information 00:28:12.772 ================== 00:28:12.772 Critical Warnings: 00:28:12.772 Available Spare Space: OK 00:28:12.772 Temperature: OK 00:28:12.772 Device Reliability: OK 00:28:12.772 Read Only: No 00:28:12.772 Volatile Memory Backup: OK 00:28:12.772 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:12.772 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:12.772 Available Spare: 0% 00:28:12.772 Available Spare Threshold: 0% 00:28:12.772 Life Percentage Used:[2024-09-30 22:56:39.649534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.649539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc95620) 00:28:12.772 [2024-09-30 22:56:39.649546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.772 [2024-09-30 22:56:39.649558] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5f00, cid 7, qid 0 00:28:12.772 [2024-09-30 22:56:39.649782] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.772 [2024-09-30 22:56:39.649788] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.772 [2024-09-30 22:56:39.649794] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.649798] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5f00) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.649831] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:12.772 [2024-09-30 22:56:39.649842] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5480) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.649848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.772 [2024-09-30 22:56:39.649854] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5600) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.649859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.772 [2024-09-30 22:56:39.649864] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5780) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.649868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.772 [2024-09-30 22:56:39.649874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.649878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.772 [2024-09-30 22:56:39.649886] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.649890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.653900] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.772 [2024-09-30 22:56:39.653908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.772 [2024-09-30 22:56:39.653923] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.772 [2024-09-30 
22:56:39.654140] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.772 [2024-09-30 22:56:39.654147] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.772 [2024-09-30 22:56:39.654150] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654154] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.654161] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654165] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654169] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.772 [2024-09-30 22:56:39.654176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.772 [2024-09-30 22:56:39.654189] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.772 [2024-09-30 22:56:39.654422] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.772 [2024-09-30 22:56:39.654428] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.772 [2024-09-30 22:56:39.654432] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654436] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.654440] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:12.772 [2024-09-30 22:56:39.654445] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:12.772 [2024-09-30 22:56:39.654454] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654458] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654462] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.772 [2024-09-30 22:56:39.654471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.772 [2024-09-30 22:56:39.654482] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.772 [2024-09-30 22:56:39.654715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.772 [2024-09-30 22:56:39.654722] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.772 [2024-09-30 22:56:39.654725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.654739] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.772 [2024-09-30 22:56:39.654754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.772 [2024-09-30 22:56:39.654764] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.772 [2024-09-30 22:56:39.654964] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.772 [2024-09-30 22:56:39.654970] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.772 [2024-09-30 22:56:39.654974] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654978] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.772 [2024-09-30 22:56:39.654988] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654992] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.772 [2024-09-30 22:56:39.654996] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.772 [2024-09-30 22:56:39.655002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.655013] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.655207] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.655213] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.655216] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.655230] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655234] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655238] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.655245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.655255] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.655453] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.655460] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.655463] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655467] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.655477] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655481] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.655492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.655504] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.655692] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 
22:56:39.655698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.655701] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655705] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.655716] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655720] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655723] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.655730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.655740] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.655949] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.655956] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.655959] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655963] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.655973] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.655981] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.655988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.655998] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.656227] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.656233] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.656236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.656250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656254] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656258] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.656265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.656275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.656470] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.656476] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.656480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 
22:56:39.656484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.656493] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656497] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656501] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.656508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.656518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.656725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.656731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.656735] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656739] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.656749] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656752] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656756] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.656763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.656773] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.656961] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.656968] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.656971] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656975] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.656985] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656989] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.656993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.657000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.657010] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.657215] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.657221] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.657224] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.657228] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.657238] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:28:12.773 [2024-09-30 22:56:39.657242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.657246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.657253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.773 [2024-09-30 22:56:39.657263] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.773 [2024-09-30 22:56:39.657477] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.773 [2024-09-30 22:56:39.657483] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.773 [2024-09-30 22:56:39.657487] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.657491] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.773 [2024-09-30 22:56:39.657500] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.657504] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.773 [2024-09-30 22:56:39.657508] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.773 [2024-09-30 22:56:39.657515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.774 [2024-09-30 22:56:39.657525] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.774 [2024-09-30 22:56:39.657774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.774 [2024-09-30 22:56:39.657782] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.774 [2024-09-30 22:56:39.657786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.774 [2024-09-30 22:56:39.657790] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.774 [2024-09-30 22:56:39.657800] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:12.774 [2024-09-30 22:56:39.657804] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:12.774 [2024-09-30 22:56:39.657808] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc95620) 00:28:12.774 [2024-09-30 22:56:39.657815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.774 [2024-09-30 22:56:39.657826] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5900, cid 3, qid 0 00:28:12.774 [2024-09-30 22:56:39.661901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:12.774 [2024-09-30 22:56:39.661909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:12.774 [2024-09-30 22:56:39.661912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:12.774 [2024-09-30 22:56:39.661916] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5900) on tqpair=0xc95620 00:28:12.774 [2024-09-30 22:56:39.661924] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:12.774 0% 00:28:12.774 Data Units Read: 0 00:28:12.774 Data Units Written: 0 00:28:12.774 Host Read Commands: 0 00:28:12.774 Host Write Commands: 0 
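An editorial aside on the *DEBUG* records interleaved above: they are the NVMe-oF shutdown handshake. nvme_ctrlr_destruct_async aborts the outstanding admin requests, a FABRIC PROPERTY SET writes CC.SHN over the TCP qpair, and the long run of repeated FABRIC PROPERTY GET records is the driver re-reading CSTS once per poll until CSTS.SHST reports shutdown complete; here the target finished in 7 ms against the 10000 ms budget logged by nvme_ctrlr_shutdown_set_cc_done. Note also that the identify dump and the destructor ran concurrently: the "Life Percentage Used:" field opened before the debug burst, its stranded "0%" value only lands after "shutdown complete", and the remaining SMART counters continue below. The same registers can be probed by hand with nvme-cli; a minimal sketch, assuming the target from this run is still listening on 10.0.0.2:4420 and that the connect enumerates as /dev/nvme0 (register offsets CC=0x14 and CSTS=0x1c are from the NVMe base specification):

  # Sketch: manual read of the shutdown-related fabric properties.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme get-property /dev/nvme0 --offset=0x14 --human-readable   # CC, incl. SHN bits
  nvme get-property /dev/nvme0 --offset=0x1c --human-readable   # CSTS, incl. SHST field
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

get-property only works against fabrics controllers, which is exactly why the trace above shows property access travelling over the TCP admin qpair rather than MMIO.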
00:28:12.774 Controller Busy Time: 0 minutes 00:28:12.774 Power Cycles: 0 00:28:12.774 Power On Hours: 0 hours 00:28:12.774 Unsafe Shutdowns: 0 00:28:12.774 Unrecoverable Media Errors: 0 00:28:12.774 Lifetime Error Log Entries: 0 00:28:12.774 Warning Temperature Time: 0 minutes 00:28:12.774 Critical Temperature Time: 0 minutes 00:28:12.774 00:28:12.774 Number of Queues 00:28:12.774 ================ 00:28:12.774 Number of I/O Submission Queues: 127 00:28:12.774 Number of I/O Completion Queues: 127 00:28:12.774 00:28:12.774 Active Namespaces 00:28:12.774 ================= 00:28:12.774 Namespace ID:1 00:28:12.774 Error Recovery Timeout: Unlimited 00:28:12.774 Command Set Identifier: NVM (00h) 00:28:12.774 Deallocate: Supported 00:28:12.774 Deallocated/Unwritten Error: Not Supported 00:28:12.774 Deallocated Read Value: Unknown 00:28:12.774 Deallocate in Write Zeroes: Not Supported 00:28:12.774 Deallocated Guard Field: 0xFFFF 00:28:12.774 Flush: Supported 00:28:12.774 Reservation: Supported 00:28:12.774 Namespace Sharing Capabilities: Multiple Controllers 00:28:12.774 Size (in LBAs): 131072 (0GiB) 00:28:12.774 Capacity (in LBAs): 131072 (0GiB) 00:28:12.774 Utilization (in LBAs): 131072 (0GiB) 00:28:12.774 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:12.774 EUI64: ABCDEF0123456789 00:28:12.774 UUID: 16d5e9d2-0fd9-4a8c-aa03-6616a390ec18 00:28:12.774 Thin Provisioning: Not Supported 00:28:12.774 Per-NS Atomic Units: Yes 00:28:12.774 Atomic Boundary Size (Normal): 0 00:28:12.774 Atomic Boundary Size (PFail): 0 00:28:12.774 Atomic Boundary Offset: 0 00:28:12.774 Maximum Single Source Range Length: 65535 00:28:12.774 Maximum Copy Length: 65535 00:28:12.774 Maximum Source Range Count: 1 00:28:12.774 NGUID/EUI64 Never Reused: No 00:28:12.774 Namespace Write Protected: No 00:28:12.774 Number of LBA Formats: 1 00:28:12.774 Current LBA Format: LBA Format #00 00:28:12.774 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:12.774 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.774 rmmod nvme_tcp 00:28:12.774 rmmod nvme_fabrics 00:28:12.774 rmmod nvme_keyring 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- 
# set -e 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 802440 ']' 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 802440 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 802440 ']' 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 802440 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.774 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 802440 00:28:13.036 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:13.036 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:13.036 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 802440' 00:28:13.036 killing process with pid 802440 00:28:13.036 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 802440 00:28:13.036 22:56:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 802440 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.036 22:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.582 22:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.582 00:28:15.582 real 0m11.959s 00:28:15.582 user 0m8.712s 00:28:15.582 sys 0m6.408s 00:28:15.582 22:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.582 22:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.582 ************************************ 00:28:15.582 END TEST nvmf_identify 00:28:15.582 ************************************ 00:28:15.582 22:56:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:15.582 22:56:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:28:15.582 22:56:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:15.582 22:56:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.582 ************************************ 00:28:15.582 START TEST nvmf_perf 00:28:15.582 ************************************ 00:28:15.582 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:15.582 * Looking for test storage... 00:28:15.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:15.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.583 --rc genhtml_branch_coverage=1 00:28:15.583 --rc genhtml_function_coverage=1 00:28:15.583 --rc genhtml_legend=1 00:28:15.583 --rc geninfo_all_blocks=1 00:28:15.583 --rc geninfo_unexecuted_blocks=1 00:28:15.583 00:28:15.583 ' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:15.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.583 --rc genhtml_branch_coverage=1 00:28:15.583 --rc genhtml_function_coverage=1 00:28:15.583 --rc genhtml_legend=1 00:28:15.583 --rc geninfo_all_blocks=1 00:28:15.583 --rc geninfo_unexecuted_blocks=1 00:28:15.583 00:28:15.583 ' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:15.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.583 --rc genhtml_branch_coverage=1 00:28:15.583 --rc genhtml_function_coverage=1 00:28:15.583 --rc genhtml_legend=1 00:28:15.583 --rc geninfo_all_blocks=1 00:28:15.583 --rc geninfo_unexecuted_blocks=1 00:28:15.583 00:28:15.583 ' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:15.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.583 --rc genhtml_branch_coverage=1 00:28:15.583 --rc genhtml_function_coverage=1 00:28:15.583 --rc genhtml_legend=1 00:28:15.583 --rc geninfo_all_blocks=1 00:28:15.583 --rc geninfo_unexecuted_blocks=1 00:28:15.583 00:28:15.583 ' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:15.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.583 22:56:42 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.583 22:56:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:23.729 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 
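The trace above is gather_supported_nvmf_pci_devs building its per-family PCI device-ID tables, e810 (0x1592, 0x159b), x722 (0x37d2) and the Mellanox mlx list under vendor IDs 0x8086/0x15b3, before matching them against the host below; this run pins SPDK_TEST_NVMF_NICS=e810. A standalone sketch of the same discovery using stock lspci and sysfs, with the E810 device ID copied from the table above:

  # Sketch: list E810 ports (vendor 0x8086, device 0x159b) and the netdev
  # each one is bound to, straight from sysfs.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "E810 $pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done

On this machine that reports the two ports the trace finds below: 0000:31:00.0 (cvl_0_0) and 0000:31:00.1 (cvl_0_1).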
00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:23.730 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:23.730 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:23.730 Found net devices under 0000:31:00.0: cvl_0_0 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.730 22:56:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:23.730 Found net devices under 0000:31:00.1: cvl_0_1 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:23.730 22:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.730 22:56:50 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:23.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:28:23.730 00:28:23.730 --- 10.0.0.2 ping statistics --- 00:28:23.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.730 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:23.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:28:23.730 00:28:23.730 --- 10.0.0.1 ping statistics --- 00:28:23.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.730 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=807131 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 807131 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 807131 ']' 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
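Read as one unit, the nvmf_tcp_init sequence above moves the first E810 port (cvl_0_0) into a private network namespace to act as the target at 10.0.0.2, leaves the second port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opens TCP port 4420, and ping-verifies both directions before nvmf_tgt is launched inside the namespace. Condensed into plain commands (a sketch, run as root; the interface names are this machine's E810 netdevs, so substitute your own):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  modprobe nvme-tcp

The sub-millisecond RTTs in the ping output above (0.531 ms and 0.274 ms) are the network baseline that the NVMe/TCP latencies reported below sit on top of.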
00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.730 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:23.730 [2024-09-30 22:56:50.226431] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:28:23.730 [2024-09-30 22:56:50.226504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.730 [2024-09-30 22:56:50.317243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.730 [2024-09-30 22:56:50.412806] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.730 [2024-09-30 22:56:50.412873] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.730 [2024-09-30 22:56:50.412882] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.730 [2024-09-30 22:56:50.412889] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.730 [2024-09-30 22:56:50.412909] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.730 [2024-09-30 22:56:50.413113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.730 [2024-09-30 22:56:50.413276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.730 [2024-09-30 22:56:50.413436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.730 [2024-09-30 22:56:50.413436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.303 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.303 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:28:24.303 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:24.303 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.303 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:24.303 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.303 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:24.303 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:24.875 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:24.875 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:24.875 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:24.875 22:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:25.137 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:25.137 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:25.137 22:56:52 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:25.137 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:25.137 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:25.397 [2024-09-30 22:56:52.192681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.397 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:25.658 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:25.658 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:25.658 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:25.658 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:25.918 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.179 [2024-09-30 22:56:52.979944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.179 22:56:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:26.179 22:56:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:28:26.441 22:56:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:26.441 22:56:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:26.441 22:56:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:27.826 Initializing NVMe Controllers 00:28:27.826 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:28:27.826 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:28:27.826 Initialization complete. Launching workers. 
00:28:27.826 ========================================================
00:28:27.826 Latency(us)
00:28:27.826 Device Information : IOPS MiB/s Average min max
00:28:27.826 PCIE (0000:65:00.0) NSID 1 from core 0: 78155.56 305.30 408.94 13.34 5609.07
00:28:27.826 ========================================================
00:28:27.826 Total : 78155.56 305.30 408.94 13.34 5609.07
00:28:27.826
00:28:27.826 22:56:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:28.767 Initializing NVMe Controllers
00:28:28.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:28.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:28.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:28.767 Initialization complete. Launching workers.
00:28:28.767 ========================================================
00:28:28.767 Latency(us)
00:28:28.767 Device Information : IOPS MiB/s Average min max
00:28:28.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.00 0.35 11162.57 83.31 45920.55
00:28:28.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21812.53 6009.65 49844.00
00:28:28.767 ========================================================
00:28:28.767 Total : 136.00 0.53 14764.76 83.31 49844.00
00:28:28.767
00:28:28.767 22:56:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:30.153 Initializing NVMe Controllers
00:28:30.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:30.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:30.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:30.153 Initialization complete. Launching workers.
00:28:30.153 ========================================================
00:28:30.153 Latency(us)
00:28:30.153 Device Information : IOPS MiB/s Average min max
00:28:30.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11784.92 46.03 2716.08 393.62 8922.38
00:28:30.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3743.66 14.62 8574.37 4656.40 16264.69
00:28:30.153 ========================================================
00:28:30.153 Total : 15528.57 60.66 4128.41 393.62 16264.69
00:28:30.153
00:28:30.153 22:56:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:28:30.153 22:56:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:28:30.153 22:56:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:32.833 Initializing NVMe Controllers
00:28:32.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:32.833 Controller IO queue size 128, less than required.
00:28:32.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.833 Controller IO queue size 128, less than required.
00:28:32.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:32.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:32.833 Initialization complete. Launching workers.
00:28:32.833 ========================================================
00:28:32.833 Latency(us)
00:28:32.833 Device Information : IOPS MiB/s Average min max
00:28:32.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1891.11 472.78 68543.13 39328.60 113256.73
00:28:32.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.91 152.48 224749.35 79718.17 361755.93
00:28:32.833 ========================================================
00:28:32.833 Total : 2501.02 625.26 106636.12 39328.60 361755.93
00:28:32.833
00:28:32.833 22:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:28:32.833 No valid NVMe controllers or AIO or URING devices found
00:28:32.833 Initializing NVMe Controllers
00:28:32.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:32.833 Controller IO queue size 128, less than required.
00:28:32.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.833 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:28:32.833 Controller IO queue size 128, less than required.
00:28:32.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.833 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:28:32.833 WARNING: Some requested NVMe devices were skipped
00:28:32.833 22:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:28:35.392 Initializing NVMe Controllers
00:28:35.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:35.392 Controller IO queue size 128, less than required.
00:28:35.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:35.392 Controller IO queue size 128, less than required.
00:28:35.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:35.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:35.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:35.392 Initialization complete. Launching workers.
00:28:35.392
00:28:35.392 ====================
00:28:35.392 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:28:35.392 TCP transport:
00:28:35.392 polls: 36787
00:28:35.392 idle_polls: 21010
00:28:35.392 sock_completions: 15777
00:28:35.392 nvme_completions: 7001
00:28:35.392 submitted_requests: 10546
00:28:35.392 queued_requests: 1
00:28:35.392
00:28:35.392 ====================
00:28:35.392 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:28:35.392 TCP transport:
00:28:35.392 polls: 34651
00:28:35.392 idle_polls: 20013
00:28:35.392 sock_completions: 14638
00:28:35.392 nvme_completions: 7027
00:28:35.392 submitted_requests: 10542
00:28:35.392 queued_requests: 1
00:28:35.392 ========================================================
00:28:35.392 Latency(us)
00:28:35.392 Device Information : IOPS MiB/s Average min max
00:28:35.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1748.69 437.17 74756.78 39936.97 122564.34
00:28:35.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1755.18 438.80 72822.21 30040.29 123724.70
00:28:35.392 ========================================================
00:28:35.392 Total : 3503.87 875.97 73787.70 30040.29 123724.70
00:28:35.392
00:28:35.392 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:35.651 rmmod nvme_tcp
00:28:35.651 rmmod nvme_fabrics
00:28:35.651 rmmod nvme_keyring
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 807131 ']'
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 807131
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 807131 ']'
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 807131
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 807131
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 807131'
00:28:35.651 killing process with pid 807131
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 807131
00:28:35.651 22:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 807131
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:37.563 22:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:40.108
00:28:40.108 real 0m24.392s
00:28:40.108 user 0m57.936s
00:28:40.108 sys 0m8.853s
00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:40.108 ************************************
00:28:40.108 END TEST nvmf_perf
00:28:40.108 ************************************
00:28:40.108 22:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:28:40.108 22:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:40.108 22:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:40.108 22:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:40.108 ************************************
00:28:40.108 START TEST nvmf_fio_host
00:28:40.108 ************************************
00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:28:40.108 * Looking for test storage...
00:28:40.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:40.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.108 --rc genhtml_branch_coverage=1 00:28:40.108 --rc genhtml_function_coverage=1 00:28:40.108 --rc genhtml_legend=1 00:28:40.108 --rc geninfo_all_blocks=1 00:28:40.108 --rc geninfo_unexecuted_blocks=1 00:28:40.108 00:28:40.108 ' 00:28:40.108 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:40.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.109 --rc genhtml_branch_coverage=1 00:28:40.109 --rc genhtml_function_coverage=1 00:28:40.109 --rc genhtml_legend=1 00:28:40.109 --rc geninfo_all_blocks=1 00:28:40.109 --rc geninfo_unexecuted_blocks=1 00:28:40.109 00:28:40.109 ' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:40.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.109 --rc genhtml_branch_coverage=1 00:28:40.109 --rc genhtml_function_coverage=1 00:28:40.109 --rc genhtml_legend=1 00:28:40.109 --rc geninfo_all_blocks=1 00:28:40.109 --rc geninfo_unexecuted_blocks=1 00:28:40.109 00:28:40.109 ' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:40.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.109 --rc genhtml_branch_coverage=1 00:28:40.109 --rc genhtml_function_coverage=1 00:28:40.109 --rc genhtml_legend=1 00:28:40.109 --rc geninfo_all_blocks=1 00:28:40.109 --rc geninfo_unexecuted_blocks=1 00:28:40.109 00:28:40.109 ' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.109 22:57:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:40.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:40.109 
22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:40.109 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:40.110 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.110 22:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.244 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:48.245 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:48.245 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
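This discovery pass is driven entirely by sysfs: each whitelisted PCI function is mapped to its kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net/, which is what produces the 'Found net devices under ...' lines in the trace below. A sketch of that mapping for the two e810 ports this log reports (the 0000:31:00.x addresses come from the trace; a NIC bound to vfio/uio instead of the ice driver would simply produce no match):

# sketch: how the harness derives netdev names from PCI addresses
for pci in 0000:31:00.0 0000:31:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue          # glob stays literal when no netdev exists
        echo "Found net devices under $pci: ${dev##*/}"
    done
done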
00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:48.245 Found net devices under 0000:31:00.0: cvl_0_0 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:48.245 Found net devices under 0000:31:00.1: cvl_0_1 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:28:48.245 00:28:48.245 --- 10.0.0.2 ping statistics --- 00:28:48.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.245 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:28:48.245 00:28:48.245 --- 10.0.0.1 ping statistics --- 00:28:48.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.245 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=814828 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 814828 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 814828 ']' 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.245 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.245 [2024-09-30 22:57:14.532261] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:28:48.245 [2024-09-30 22:57:14.532326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.245 [2024-09-30 22:57:14.625106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.245 [2024-09-30 22:57:14.692713] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.246 [2024-09-30 22:57:14.692751] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.246 [2024-09-30 22:57:14.692760] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.246 [2024-09-30 22:57:14.692767] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.246 [2024-09-30 22:57:14.692773] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.246 [2024-09-30 22:57:14.692820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.246 [2024-09-30 22:57:14.692923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.246 [2024-09-30 22:57:14.693010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.246 [2024-09-30 22:57:14.693011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.505 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.505 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:28:48.505 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:48.505 [2024-09-30 22:57:15.494479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.765 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:48.765 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.765 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.765 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:48.765 Malloc1 00:28:48.765 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.025 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:49.286 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.286 [2024-09-30 22:57:16.284303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:49.547 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:50.133 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:50.133 fio-3.35 00:28:50.133 Starting 1 thread 00:28:52.677 00:28:52.677 test: (groupid=0, jobs=1): 
err= 0: pid=815383: Mon Sep 30 22:57:19 2024 00:28:52.677 read: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2005msec) 00:28:52.677 slat (usec): min=2, max=263, avg= 2.15, stdev= 2.18 00:28:52.677 clat (usec): min=3514, max=8795, avg=5134.22, stdev=444.51 00:28:52.677 lat (usec): min=3548, max=8797, avg=5136.37, stdev=444.61 00:28:52.677 clat percentiles (usec): 00:28:52.677 | 1.00th=[ 4359], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:28:52.677 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:28:52.677 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:28:52.677 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8160], 99.95th=[ 8291], 00:28:52.677 | 99.99th=[ 8717] 00:28:52.677 bw ( KiB/s): min=52240, max=55840, per=99.99%, avg=54890.00, stdev=1767.37, samples=4 00:28:52.677 iops : min=13060, max=13960, avg=13722.50, stdev=441.84, samples=4 00:28:52.677 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2005msec); 0 zone resets 00:28:52.677 slat (usec): min=2, max=213, avg= 2.20, stdev= 1.44 00:28:52.677 clat (usec): min=2452, max=7792, avg=4139.77, stdev=376.53 00:28:52.677 lat (usec): min=2470, max=7794, avg=4141.97, stdev=376.67 00:28:52.677 clat percentiles (usec): 00:28:52.677 | 1.00th=[ 3490], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3884], 00:28:52.677 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:28:52.677 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:28:52.677 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6587], 99.95th=[ 7046], 00:28:52.677 | 99.99th=[ 7701] 00:28:52.677 bw ( KiB/s): min=52616, max=55688, per=100.00%, avg=54808.00, stdev=1469.85, samples=4 00:28:52.677 iops : min=13154, max=13922, avg=13702.00, stdev=367.46, samples=4 00:28:52.677 lat (msec) : 4=16.80%, 10=83.20% 00:28:52.677 cpu : usr=78.14%, sys=21.21%, ctx=28, majf=0, minf=18 00:28:52.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:52.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:52.677 issued rwts: total=27516,27460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:52.677 00:28:52.677 Run status group 0 (all jobs): 00:28:52.677 READ: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2005-2005msec 00:28:52.677 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (112MB), run=2005-2005msec 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:52.677 
22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:52.677 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:52.678 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:52.678 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:52.678 fio-3.35 00:28:52.678 Starting 1 thread 00:28:55.227 00:28:55.227 test: (groupid=0, jobs=1): err= 0: pid=816189: Mon Sep 30 22:57:21 2024 00:28:55.227 read: IOPS=9573, BW=150MiB/s (157MB/s)(300MiB/2004msec) 00:28:55.227 slat (usec): min=3, max=114, avg= 3.63, stdev= 1.67 00:28:55.227 clat (usec): min=2412, max=15893, avg=8114.05, stdev=1900.40 00:28:55.227 lat (usec): min=2415, max=15910, avg=8117.68, stdev=1900.58 00:28:55.227 clat percentiles (usec): 00:28:55.227 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6456], 00:28:55.227 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8586], 00:28:55.227 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[11076], 00:28:55.227 | 99.00th=[12780], 99.50th=[13829], 99.90th=[15270], 99.95th=[15401], 00:28:55.227 | 99.99th=[15926] 00:28:55.227 bw ( KiB/s): min=69696, max=82656, per=49.63%, avg=76024.00, stdev=5672.54, samples=4 00:28:55.227 iops : min= 4356, max= 5166, avg=4751.50, stdev=354.53, samples=4 00:28:55.227 write: IOPS=5599, BW=87.5MiB/s (91.7MB/s)(155MiB/1776msec); 0 zone resets 00:28:55.227 slat (usec): min=39, max=566, 
avg=41.18, stdev=10.64 00:28:55.227 clat (usec): min=3434, max=17746, avg=9090.82, stdev=1453.70 00:28:55.227 lat (usec): min=3474, max=17877, avg=9132.00, stdev=1457.09 00:28:55.227 clat percentiles (usec): 00:28:55.227 | 1.00th=[ 6063], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 7898], 00:28:55.227 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9372], 00:28:55.227 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:28:55.227 | 99.00th=[12911], 99.50th=[15139], 99.90th=[17171], 99.95th=[17433], 00:28:55.227 | 99.99th=[17695] 00:28:55.227 bw ( KiB/s): min=72640, max=86176, per=88.26%, avg=79080.00, stdev=5920.28, samples=4 00:28:55.227 iops : min= 4540, max= 5386, avg=4942.50, stdev=370.02, samples=4 00:28:55.227 lat (msec) : 4=0.53%, 10=79.69%, 20=19.79% 00:28:55.227 cpu : usr=84.42%, sys=14.28%, ctx=17, majf=0, minf=30 00:28:55.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:55.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:55.227 issued rwts: total=19186,9945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:55.227 00:28:55.227 Run status group 0 (all jobs): 00:28:55.227 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=300MiB (314MB), run=2004-2004msec 00:28:55.227 WRITE: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=155MiB (163MB), run=1776-1776msec 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.227 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.227 rmmod nvme_tcp 00:28:55.227 rmmod nvme_fabrics 00:28:55.488 rmmod nvme_keyring 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 814828 ']' 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 814828 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 814828 ']' 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 814828 
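For orientation, the fio_plugin flow traced in the two runs above reduces to the shell sketch below. It is reconstructed from the xtrace output rather than copied from autotest_common.sh, so variable names such as "preload" are illustrative:

  # Preload a sanitizer runtime if the SPDK fio plugin links one, then
  # preload the plugin itself so fio can resolve ioengine=spdk.
  fio_dir=/usr/src/fio
  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
  sanitizers=('libasan' 'libclang_rt.asan')
  preload=
  for sanitizer in "${sanitizers[@]}"; do
      lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n $lib ]] && preload=$lib
  done
  LD_PRELOAD="$preload $plugin" "$fio_dir/fio" \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

In this run both greps came back empty (not a sanitizer build), which is why the traced LD_PRELOAD carried only the plugin path.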
00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 814828 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 814828' 00:28:55.488 killing process with pid 814828 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 814828 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 814828 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.488 22:57:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.035 00:28:58.035 real 0m17.873s 00:28:58.035 user 1m5.017s 00:28:58.035 sys 0m7.544s 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.035 ************************************ 00:28:58.035 END TEST nvmf_fio_host 00:28:58.035 ************************************ 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.035 ************************************ 00:28:58.035 START TEST nvmf_failover 00:28:58.035 ************************************ 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:28:58.035 * Looking for test storage... 00:28:58.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:58.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.035 --rc genhtml_branch_coverage=1 00:28:58.035 --rc genhtml_function_coverage=1 00:28:58.035 --rc genhtml_legend=1 00:28:58.035 --rc geninfo_all_blocks=1 00:28:58.035 --rc geninfo_unexecuted_blocks=1 00:28:58.035 00:28:58.035 ' 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:58.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.035 --rc genhtml_branch_coverage=1 00:28:58.035 --rc genhtml_function_coverage=1 00:28:58.035 --rc genhtml_legend=1 00:28:58.035 --rc geninfo_all_blocks=1 00:28:58.035 --rc geninfo_unexecuted_blocks=1 00:28:58.035 00:28:58.035 ' 00:28:58.035 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:58.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.035 --rc genhtml_branch_coverage=1 00:28:58.035 --rc genhtml_function_coverage=1 00:28:58.035 --rc genhtml_legend=1 00:28:58.036 --rc geninfo_all_blocks=1 00:28:58.036 --rc geninfo_unexecuted_blocks=1 00:28:58.036 00:28:58.036 ' 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:58.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.036 --rc genhtml_branch_coverage=1 00:28:58.036 --rc genhtml_function_coverage=1 00:28:58.036 --rc genhtml_legend=1 00:28:58.036 --rc geninfo_all_blocks=1 00:28:58.036 --rc geninfo_unexecuted_blocks=1 00:28:58.036 00:28:58.036 ' 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
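The lt 1.15 2 / cmp_versions trace a few entries up is deciding whether the installed lcov predates 2.x, evidently so that the right --rc coverage flags get exported, as the following trace lines show. A self-contained approximation of that field-wise compare, inferred from the xtrace rather than taken verbatim from scripts/common.sh:

  # Returns success when $1 sorts strictly before $2, comparing
  # dot/dash-separated numeric fields left to right.
  lt() {
      local IFS=.- v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"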
00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.036 22:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:06.177 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:06.177 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:06.177 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:06.178 Found net devices under 0000:31:00.0: cvl_0_0 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:06.178 Found net devices under 0000:31:00.1: cvl_0_1 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:29:06.178 00:29:06.178 --- 10.0.0.2 ping statistics --- 00:29:06.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.178 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:29:06.178 00:29:06.178 --- 10.0.0.1 ping statistics --- 00:29:06.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.178 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=820918 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 820918 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 820918 ']' 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.178 22:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:06.178 [2024-09-30 22:57:32.654523] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:29:06.178 [2024-09-30 22:57:32.654595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.178 [2024-09-30 22:57:32.745244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:06.178 [2024-09-30 22:57:32.840792] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
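The namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) condenses to the sequence below. The cvl_0_0/cvl_0_1 names are simply what this host assigned to the two e810 ports; the addresses and the iptables rule are lifted directly from the trace:

  # Move the target-side port into its own namespace so initiator
  # (10.0.0.1) and target (10.0.0.2) talk across a real physical link
  # even though both ends live on one machine.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                              # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> initiator

The nvmf_tgt launch that follows is then wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why the target listens on 10.0.0.2 while fio and bdevperf dial in from the root namespace.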
00:29:06.178 [2024-09-30 22:57:32.840852] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.178 [2024-09-30 22:57:32.840860] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.178 [2024-09-30 22:57:32.840868] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.178 [2024-09-30 22:57:32.840874] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.178 [2024-09-30 22:57:32.841041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.178 [2024-09-30 22:57:32.841305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.178 [2024-09-30 22:57:32.841307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.752 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.752 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:06.752 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:06.752 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.752 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.752 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:06.752 [2024-09-30 22:57:33.672221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.752 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:07.013 Malloc0 00:29:07.013 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.274 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.536 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.536 [2024-09-30 22:57:34.504840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.536 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:07.797 [2024-09-30 22:57:34.701424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:07.797 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:08.059 [2024-09-30 22:57:34.894168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=821298 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 821298 /var/tmp/bdevperf.sock 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 821298 ']' 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:08.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:08.060 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:09.003 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:09.003 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:09.003 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:09.263 NVMe0n1 00:29:09.263 22:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:09.263 00:29:09.522 22:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=821626 00:29:09.522 22:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:09.522 22:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:10.465 22:57:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.465 [2024-09-30 22:57:37.447433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447488] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 00:29:10.465 [2024-09-30 22:57:37.447589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set 
00:29:10.465 [2024-09-30 22:57:37.447593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c650 is same with the state(6) to be set
00:29:10.465 [... same message repeated for tqpair=0x52c650 through 2024-09-30 22:57:37.447864 ...]
00:29:10.466 22:57:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:29:13.763 22:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:14.023
00:29:14.023 22:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
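The two RPCs at failover.sh@47 and @48 are the core of the failover step: a second path to the subsystem is attached on port 4422, then the 4421 listener is removed so the host is forced over to the new path. A minimal standalone sketch of the same sequence (socket path, NQN, and addresses copied from the log above; it assumes a running bdevperf instance and nvmf target, so treat it as illustrative rather than a drop-in script):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # add a second path to the same subsystem on port 4422
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the listener the host was using, forcing I/O onto the 4422 path
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421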
00:29:14.023 [2024-09-30 22:57:41.022145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d400 is same with the state(6) to be set
00:29:14.023 [... same message repeated for tqpair=0x52d400 through 2024-09-30 22:57:41.022392 ...]
00:29:14.283 22:57:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:29:17.577 22:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:17.577 [2024-09-30 22:57:44.212023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:17.577 22:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:29:18.520 22:57:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
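Steps @53 through @57 then fail back: the original 4420 listener is re-added (the nvmf_tcp_listen NOTICE confirms the target is listening again) before the temporary 4422 listener is torn down. When reproducing this by hand, the listener set can be sanity-checked between steps; a sketch, assuming the stock nvmf_subsystem_get_listeners RPC (output shape varies by SPDK version):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1   # expect 4420 and, until the next step, 4422
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422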
00:29:18.520 [2024-09-30 22:57:45.404808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52e350 is same with the state(6) to be set
00:29:18.520 [... same message repeated for tqpair=0x52e350 through 2024-09-30 22:57:45.405280 ...]
00:29:18.521 22:57:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 821626
00:29:25.114 {
00:29:25.114   "results": [
00:29:25.114     {
00:29:25.114       "job": "NVMe0n1",
00:29:25.114       "core_mask": "0x1",
00:29:25.114       "workload": "verify",
00:29:25.114       "status": "finished",
00:29:25.114       "verify_range": {
00:29:25.114         "start": 0,
00:29:25.114         "length": 16384
00:29:25.114       },
00:29:25.114       "queue_depth": 128,
00:29:25.114       "io_size": 4096,
00:29:25.114       "runtime": 15.007116,
00:29:25.114       "iops": 12465.886183594503,
00:29:25.114       "mibps": 48.694867904666026,
00:29:25.114       "io_failed": 6365,
00:29:25.114       "io_timeout": 0,
00:29:25.114       "avg_latency_us": 9908.991183024713,
00:29:25.114       "min_latency_us": 399.36,
00:29:25.114       "max_latency_us": 20753.066666666666
00:29:25.114     }
00:29:25.114   ],
00:29:25.114   "core_count": 1
00:29:25.114 }
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 821298
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 821298 ']'
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 821298
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 821298
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 821298'
killing process with pid 821298
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 821298
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 821298
00:29:25.114 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-09-30 22:57:34.975341] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
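Once the per-line log timestamps are stripped, the block above is ordinary bdevperf JSON output and can be post-processed directly. A minimal sketch (results.json is a hypothetical file holding the de-prefixed blob; jq is not part of the test, just a convenient reader):

  # headline numbers from bdevperf's JSON summary
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os, avg latency \(.avg_latency_us) us"' results.json

For the run above this prints the job name, roughly 12466 IOPS, 6365 failed I/Os, and the roughly 9.9 ms average latency.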
00:29:25.114 [2024-09-30 22:57:34.975424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821298 ]
00:29:25.114 [2024-09-30 22:57:35.060996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:25.114 [2024-09-30 22:57:35.157074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:25.114 Running I/O for 15 seconds...
00:29:25.114 11812.00 IOPS, 46.14 MiB/s
00:29:25.114 [2024-09-30 22:57:37.448702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.114 [2024-09-30 22:57:37.448736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
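The EAL parameters line records how the harness launched this bdevperf instance. An equivalent standalone launch, as a sketch only (paths assume an SPDK build tree; the -q/-o/-w/-t values mirror the queue_depth, io_size, workload, and runtime reported in the JSON summary, and -z/-r start the app idle, waiting for RPC configuration on the same socket the test drives):

  # start bdevperf idle, controlled via /var/tmp/bdevperf.sock
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 &
  # attach the first path, then trigger the run from a second shell
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests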
00:29:25.115 [... the replayed dump continues in the same pattern: each outstanding READ/WRITE command (lba 102536 through 103312, len:8) is printed with a matching "ABORTED - SQ DELETION (00/08)" completion, through 2024-09-30 22:57:37.450440 ...]
WRITE sqid:1 cid:49 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103400 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.117 [2024-09-30 22:57:37.450772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:25.117 [2024-09-30 22:57:37.450789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.117 [2024-09-30 22:57:37.450812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.118 [2024-09-30 22:57:37.450820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103488 len:8 PRP1 0x0 PRP2 0x0 00:29:25.118 [2024-09-30 22:57:37.450827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.450838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.118 [2024-09-30 22:57:37.450845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.118 [2024-09-30 22:57:37.450851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103496 len:8 PRP1 0x0 PRP2 0x0 00:29:25.118 [2024-09-30 22:57:37.450858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.450866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.118 [2024-09-30 22:57:37.450872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.118 [2024-09-30 22:57:37.450878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103504 len:8 PRP1 0x0 PRP2 0x0 00:29:25.118 [2024-09-30 22:57:37.450887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.450939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.118 [2024-09-30 22:57:37.450946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.118 [2024-09-30 22:57:37.450952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103512 len:8 PRP1 0x0 PRP2 0x0 00:29:25.118 [2024-09-30 22:57:37.450959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.450967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.118 [2024-09-30 22:57:37.450972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.118 [2024-09-30 22:57:37.450979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103520 len:8 PRP1 0x0 PRP2 0x0 00:29:25.118 [2024-09-30 22:57:37.450987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.450995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.118 [2024-09-30 22:57:37.451001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.118 [2024-09-30 22:57:37.451006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103528 len:8 PRP1 0x0 PRP2 0x0 00:29:25.118 [2024-09-30 22:57:37.451014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:25.118 [2024-09-30 22:57:37.451022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.118 [2024-09-30 22:57:37.451028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.118 [2024-09-30 22:57:37.451034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103536 len:8 PRP1 0x0 PRP2 0x0 00:29:25.118 [2024-09-30 22:57:37.451041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.451049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.118 [2024-09-30 22:57:37.451054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.118 [2024-09-30 22:57:37.451060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103544 len:8 PRP1 0x0 PRP2 0x0 00:29:25.118 [2024-09-30 22:57:37.451068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.451105] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22b1810 was disconnected and freed. reset controller. 00:29:25.118 [2024-09-30 22:57:37.451115] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:25.118 [2024-09-30 22:57:37.451137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.118 [2024-09-30 22:57:37.451145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.451154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.118 [2024-09-30 22:57:37.451161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.451169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.118 [2024-09-30 22:57:37.451177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.462909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.118 [2024-09-30 22:57:37.462939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.118 [2024-09-30 22:57:37.462950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.118 [2024-09-30 22:57:37.463007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290fe0 (9): Bad file descriptor 00:29:25.118 [2024-09-30 22:57:37.466514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.118 [2024-09-30 22:57:37.512317] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:25.118 11293.00 IOPS, 44.11 MiB/s 11233.00 IOPS, 43.88 MiB/s 11564.50 IOPS, 45.17 MiB/s
[00:29:25.118-00:29:25.122, 2024-09-30 22:57:41.024638-41.026350: second repeated burst of nvme_qpair.c NOTICE pairs, as above — READ/WRITE sqid:1, lba 55024-55856, len:8, each completed ABORTED - SQ DELETION (00/08) qid:1; queued WRITEs (lba 55864-56032, PRP1 0x0 PRP2 0x0) aborted and completed manually]
00:29:25.122 [2024-09-30 22:57:41.026355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting
queued i/o 00:29:25.122 [2024-09-30 22:57:41.039248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.122 [2024-09-30 22:57:41.039276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56040 len:8 PRP1 0x0 PRP2 0x0 00:29:25.122 [2024-09-30 22:57:41.039290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:41.039334] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22b3780 was disconnected and freed. reset controller. 00:29:25.122 [2024-09-30 22:57:41.039345] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:25.122 [2024-09-30 22:57:41.039371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.122 [2024-09-30 22:57:41.039380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:41.039389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.122 [2024-09-30 22:57:41.039396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:41.039404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.122 [2024-09-30 22:57:41.039411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:41.039420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.122 [2024-09-30 22:57:41.039428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:41.039435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.122 [2024-09-30 22:57:41.039463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290fe0 (9): Bad file descriptor 00:29:25.122 [2024-09-30 22:57:41.042699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.122 [2024-09-30 22:57:41.077713] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:25.122 11696.60 IOPS, 45.69 MiB/s 11939.50 IOPS, 46.64 MiB/s 12082.57 IOPS, 47.20 MiB/s 12163.88 IOPS, 47.52 MiB/s 12241.56 IOPS, 47.82 MiB/s [2024-09-30 22:57:45.406007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.122 [2024-09-30 22:57:45.406246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.122 [2024-09-30 22:57:45.406252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406283] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.123 [2024-09-30 22:57:45.406379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 
22:57:45.406644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.123 [2024-09-30 22:57:45.406655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.123 [2024-09-30 22:57:45.406660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:111 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.406992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.406998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121336 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.124 [2024-09-30 22:57:45.407107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.124 [2024-09-30 22:57:45.407114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:25.125 [2024-09-30 22:57:45.407118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.125 [2024-09-30 22:57:45.407130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.125 [2024-09-30 22:57:45.407141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.125 [2024-09-30 22:57:45.407152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.125 [2024-09-30 22:57:45.407164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.125 [2024-09-30 22:57:45.407176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.125 [2024-09-30 22:57:45.407187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.125 [2024-09-30 22:57:45.407199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.125 [2024-09-30 22:57:45.407210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121488 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:29:25.125 [2024-09-30 22:57:45.407253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121496 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121504 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121512 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121520 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121528 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121536 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 
22:57:45.407370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121544 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121552 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121560 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121568 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121576 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121584 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407485] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121592 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121600 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121608 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121616 len:8 PRP1 0x0 PRP2 0x0 00:29:25.125 [2024-09-30 22:57:45.407554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.125 [2024-09-30 22:57:45.407559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.125 [2024-09-30 22:57:45.407563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.125 [2024-09-30 22:57:45.407567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121624 len:8 PRP1 0x0 PRP2 0x0 00:29:25.126 [2024-09-30 22:57:45.407573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.126 [2024-09-30 22:57:45.407578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.126 [2024-09-30 22:57:45.407582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.126 [2024-09-30 22:57:45.407586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121632 len:8 PRP1 0x0 PRP2 0x0 00:29:25.126 [2024-09-30 22:57:45.407591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.126 [2024-09-30 22:57:45.407596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.126 [2024-09-30 22:57:45.407600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.407604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121640 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.407609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.407616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.407620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121648 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.420778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121656 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.420802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120864 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.420823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120872 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.420846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120880 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.420865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120888 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.420884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120896 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.420909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120904 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:25.126 [2024-09-30 22:57:45.420928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:25.126 [2024-09-30 22:57:45.420932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120912 len:8 PRP1 0x0 PRP2 0x0
00:29:25.126 [2024-09-30 22:57:45.420938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.420974] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c0590 was disconnected and freed. reset controller.
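The burst of notices above is the normal signature of a dropped path rather than a data error: once the submission queue is deleted for failover, every queued WRITE/READ is completed manually with ABORTED - SQ DELETION, and only after the qpair is freed does the reset begin. When triaging a log like this, counting aborts against successful reconnects is usually enough; a minimal sketch in the test's own grep idiom, assuming the run's output has been captured to try.txt as this script does further down:

  # rough failover triage on a captured log (file name as used by this test)
  aborts=$(grep -c 'ABORTED - SQ DELETION' try.txt)
  resets=$(grep -c 'Resetting controller successful' try.txt)
  echo "aborted completions: $aborts, successful resets: $resets"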
00:29:25.126 [2024-09-30 22:57:45.420982] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:25.126 [2024-09-30 22:57:45.421005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:25.126 [2024-09-30 22:57:45.421012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.421019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:25.126 [2024-09-30 22:57:45.421025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.421031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:25.126 [2024-09-30 22:57:45.421036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.421041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:25.126 [2024-09-30 22:57:45.421049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.126 [2024-09-30 22:57:45.421055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:25.126 [2024-09-30 22:57:45.421087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290fe0 (9): Bad file descriptor
00:29:25.126 [2024-09-30 22:57:45.423642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:25.126 [2024-09-30 22:57:45.490348] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
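A failover destination such as 10.0.0.2:4420 is only available here because the script registered several transport IDs under one controller name, which is what bdev_nvme_failover_trid rotates through. A condensed sketch of that registration, using the same rpc.py call, address, and NQN that appear verbatim later in this log (the loop itself is an illustration, not the script's literal code):

  # register three paths under one bdev name; bdev_nvme fails over between them
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done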
00:29:25.126 12203.10 IOPS, 47.67 MiB/s 12272.27 IOPS, 47.94 MiB/s 12330.25 IOPS, 48.17 MiB/s 12390.46 IOPS, 48.40 MiB/s 12428.79 IOPS, 48.55 MiB/s 12464.40 IOPS, 48.69 MiB/s
00:29:25.126 Latency(us)
00:29:25.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:25.126 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:25.126 Verification LBA range: start 0x0 length 0x4000
00:29:25.126 NVMe0n1 : 15.01 12465.89 48.69 424.13 0.00 9908.99 399.36 20753.07
00:29:25.126 ===================================================================================================================
00:29:25.126 Total : 12465.89 48.69 424.13 0.00 9908.99 399.36 20753.07
00:29:25.126 Received shutdown signal, test time was about 15.000000 seconds
00:29:25.126
00:29:25.126 Latency(us)
00:29:25.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:25.126 ===================================================================================================================
00:29:25.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=824638
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 824638 /var/tmp/bdevperf.sock
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 824638 ']'
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:25.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
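Started with -z, bdevperf comes up idle: it listens on /var/tmp/bdevperf.sock and waits to be configured over RPC instead of running a workload immediately, which is what lets the script attach and detach paths before any I/O is issued. The overall pattern, condensed from the commands visible in this log:

  # start bdevperf idle on a private RPC socket, configure it, then trigger the run
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # ...bdev_nvme_attach_controller calls go through rpc.py -s /var/tmp/bdevperf.sock...
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests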
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:25.126 22:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:25.699 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:25.699 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:29:25.699 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:25.699 [2024-09-30 22:57:52.643072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:29:25.699 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:25.960 [2024-09-30 22:57:52.827508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:29:25.960 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:26.221 NVMe0n1
00:29:26.221 22:57:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:26.482
00:29:26.482 22:57:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:26.742
00:29:26.742 22:57:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
22:57:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:29:27.003 22:57:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:27.003 22:57:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:29:30.303 22:57:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
22:57:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:29:30.303 22:57:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=825652
00:29:30.303 22:57:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
22:57:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 825652
00:29:31.686 {
00:29:31.686 "results": [
00:29:31.686 {
00:29:31.686 "job": "NVMe0n1",
00:29:31.686 "core_mask": "0x1",
00:29:31.686 "workload": "verify",
00:29:31.686 "status": "finished",
00:29:31.686 "verify_range": {
00:29:31.686 "start": 0,
00:29:31.686 "length": 16384
00:29:31.686 },
00:29:31.686 "queue_depth": 128,
00:29:31.686 "io_size": 4096,
00:29:31.686 "runtime": 1.044908,
00:29:31.686 "iops": 12595.367247642855,
00:29:31.686 "mibps": 49.2006533111049,
00:29:31.686 "io_failed": 0,
00:29:31.686 "io_timeout": 0,
00:29:31.686 "avg_latency_us": 9744.678416533698,
00:29:31.686 "min_latency_us": 1501.8666666666666,
00:29:31.686 "max_latency_us": 41943.04
00:29:31.686 }
00:29:31.686 ],
00:29:31.686 "core_count": 1
00:29:31.686 }
00:29:31.686 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:31.686 [2024-09-30 22:57:51.687101] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:29:31.686 [2024-09-30 22:57:51.687159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824638 ]
00:29:31.686 [2024-09-30 22:57:51.765801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:31.686 [2024-09-30 22:57:51.819007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:31.686 [2024-09-30 22:57:53.965172] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:31.686 [2024-09-30 22:57:53.965210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:31.686 [2024-09-30 22:57:53.965219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:31.686 [2024-09-30 22:57:53.965227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:31.686 [2024-09-30 22:57:53.965233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:31.686 [2024-09-30 22:57:53.965240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:31.686 [2024-09-30 22:57:53.965245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:31.686 [2024-09-30 22:57:53.965251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:31.686 [2024-09-30 22:57:53.965256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:31.686 [2024-09-30 22:57:53.965262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.686 [2024-09-30 22:57:53.965283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.686 [2024-09-30 22:57:53.965294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5ffe0 (9): Bad file descriptor
00:29:31.686 [2024-09-30 22:57:53.968561] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:31.686 Running I/O for 1 seconds...
00:29:31.686 13033.00 IOPS, 50.91 MiB/s
00:29:31.686 Latency(us)
00:29:31.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:31.686 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:31.686 Verification LBA range: start 0x0 length 0x4000
00:29:31.686 NVMe0n1 : 1.04 12595.37 49.20 0.00 0.00 9744.68 1501.87 41943.04
00:29:31.686 ===================================================================================================================
00:29:31.686 Total : 12595.37 49.20 0.00 0.00 9744.68 1501.87 41943.04
00:29:31.686 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:29:31.947 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:31.947 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:32.207 22:57:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:32.207 22:57:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 824638
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 824638 ']'
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 824638
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 824638
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 824638'
00:29:35.509 killing process with pid 824638
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 824638
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 824638
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:29:35.509 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.842 rmmod nvme_tcp 00:29:35.842 rmmod nvme_fabrics 00:29:35.842 rmmod nvme_keyring 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 820918 ']' 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 820918 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 820918 ']' 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 820918 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 820918 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 820918' 00:29:35.842 killing process with pid 820918 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 820918 00:29:35.842 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 820918 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:29:36.180 22:58:02 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.180 22:58:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.124 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.124 00:29:38.124 real 0m40.343s 00:29:38.124 user 2m2.914s 00:29:38.124 sys 0m9.015s 00:29:38.124 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:38.124 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:38.124 ************************************ 00:29:38.124 END TEST nvmf_failover 00:29:38.124 ************************************ 00:29:38.124 22:58:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:38.124 22:58:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:38.124 22:58:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:38.124 22:58:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.124 ************************************ 00:29:38.124 START TEST nvmf_host_discovery 00:29:38.124 ************************************ 00:29:38.124 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:38.385 * Looking for test storage... 
00:29:38.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:38.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.385 --rc genhtml_branch_coverage=1 00:29:38.385 --rc genhtml_function_coverage=1 00:29:38.385 --rc genhtml_legend=1 00:29:38.385 --rc geninfo_all_blocks=1 00:29:38.385 --rc geninfo_unexecuted_blocks=1 00:29:38.385 00:29:38.385 ' 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:38.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.385 --rc genhtml_branch_coverage=1 00:29:38.385 --rc genhtml_function_coverage=1 00:29:38.385 --rc genhtml_legend=1 00:29:38.385 --rc geninfo_all_blocks=1 00:29:38.385 --rc geninfo_unexecuted_blocks=1 00:29:38.385 00:29:38.385 ' 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:38.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.385 --rc genhtml_branch_coverage=1 00:29:38.385 --rc genhtml_function_coverage=1 00:29:38.385 --rc genhtml_legend=1 00:29:38.385 --rc geninfo_all_blocks=1 00:29:38.385 --rc geninfo_unexecuted_blocks=1 00:29:38.385 00:29:38.385 ' 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:38.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.385 --rc genhtml_branch_coverage=1 00:29:38.385 --rc genhtml_function_coverage=1 00:29:38.385 --rc genhtml_legend=1 00:29:38.385 --rc geninfo_all_blocks=1 00:29:38.385 --rc geninfo_unexecuted_blocks=1 00:29:38.385 00:29:38.385 ' 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:38.385 22:58:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.385 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:38.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.386 22:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:29:46.521 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:46.522 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:46.522 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:46.522 Found net devices under 0000:31:00.0: cvl_0_0 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:46.522 Found net devices under 0000:31:00.1: cvl_0_1 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.522 22:58:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:46.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:46.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms
00:29:46.522
00:29:46.522 --- 10.0.0.2 ping statistics ---
00:29:46.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:46.522 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:46.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:46.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:29:46.522
00:29:46.522 --- 10.0.0.1 ping statistics ---
00:29:46.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:46.522 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:29:46.522 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=831061
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 831061
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 831061 ']'
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:46.523 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:46.523 [2024-09-30 22:58:13.088819] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
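At this point the target side of the test owns the cvl_0_0 port inside the cvl_0_0_ns_spdk namespace with 10.0.0.2, while the initiator keeps cvl_0_1 with 10.0.0.1 in the root namespace, so host and target exchange real TCP traffic over the E810 link on a single machine. The plumbing, condensed from the common.sh steps above (device and namespace names as in this run):

  # split the two ports: target side in its own namespace, initiator in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # every target-side command is then wrapped, including the target itself:
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2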
00:29:46.523 [2024-09-30 22:58:13.088885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.523 [2024-09-30 22:58:13.182318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.523 [2024-09-30 22:58:13.274552] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.523 [2024-09-30 22:58:13.274615] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.523 [2024-09-30 22:58:13.274623] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.523 [2024-09-30 22:58:13.274630] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.523 [2024-09-30 22:58:13.274636] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.523 [2024-09-30 22:58:13.274667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.093 [2024-09-30 22:58:13.968037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.093 [2024-09-30 22:58:13.980284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.093 null0 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.093 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.093 null1 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=831284 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 831284 /tmp/host.sock 00:29:47.093 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 831284 ']' 00:29:47.094 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:29:47.094 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:47.094 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:47.094 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:47.094 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:47.094 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.094 [2024-09-30 22:58:14.085649] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
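discovery.sh runs two SPDK applications side by side: the nvmf_tgt started earlier is the target, and this second instance plays the NVMe-oF host, so it gets a disjoint core mask (0x1 against the target's 0x2) and its own RPC socket; every host-side rpc_cmd in the rest of the test goes through /tmp/host.sock. A sketch of the split, with paths as in this run (waitforlisten being the suite's own helper):

  # target: inside the netns, core 1, default /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # host: root namespace, core 0, private RPC socket
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!
  waitforlisten "$hostpid" /tmp/host.sock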
00:29:47.094 [2024-09-30 22:58:14.085716] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831284 ] 00:29:47.353 [2024-09-30 22:58:14.155328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.353 [2024-09-30 22:58:14.251779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:47.923 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:48.207 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:48.207 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:48.208 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.469 [2024-09-30 22:58:15.239567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:29:48.469 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:29:49.037 [2024-09-30 22:58:15.969112] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:49.037 [2024-09-30 22:58:15.969148] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:49.037 [2024-09-30 22:58:15.969163] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:49.296 
[2024-09-30 22:58:16.058426] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:49.296 [2024-09-30 22:58:16.118380] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:49.296 [2024-09-30 22:58:16.118416] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
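The INFO lines above are the whole discovery round trip: bdev_nvme_start_discovery attached a discovery controller to 10.0.0.2:8009, read the discovery log page, found the new entry for nqn.2016-06.io.spdk:cnode0 on port 4420, and auto-attached it as controller nvme0, surfacing namespace null0 as bdev nvme0n1. Because all of that is asynchronous, the test polls rather than asserts. Reconstructed from the @914-@920 and @55/@59 xtrace, the waitforcondition helper and the two accessors it evaluates are essentially the following (a sketch; the real bodies live in common/autotest_common.sh and host/discovery.sh, and rpc_cmd is the test suite's wrapper around scripts/rpc.py):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0   # condition met
          sleep 1
      done
      return 1                       # give up after ~10 attempts
  }
  get_subsystem_names() {   # controller names seen by the host, space-joined
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdev names created by attach, space-joined
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

An empty string from either accessor means nothing has attached yet; 'nvme0' and 'nvme0n1' confirm the discovery entry was consumed.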
00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:49.556 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:49.816 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:49.816 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:49.816 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:49.816 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:49.817 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:50.076 22:58:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:50.076 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.077 [2024-09-30 22:58:16.984077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:50.077 [2024-09-30 22:58:16.984713] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:50.077 [2024-09-30 22:58:16.984740] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 
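The @74/@75 lines implement the notification accounting behind is_notification_count_eq: each check fetches every event past the running notify_id cursor from the host's RPC socket and advances the cursor by however many came back, so adding the null1 namespace registered exactly one new event (notify_id 1 -> 2). A stand-alone equivalent of what the trace shows (hedged; argument handling in the real helper may differ):

  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

The listener just added on port 4421 is expected to raise no new bdev notification at all, only the AER seen above followed by a refreshed discovery log page.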
00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.077 22:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:50.077 22:58:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.077 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.337 [2024-09-30 22:58:17.114374] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:50.337 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:50.337 22:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:29:50.596 [2024-09-30 22:58:17.374817] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:50.597 [2024-09-30 22:58:17.374836] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:50.597 [2024-09-30 22:58:17.374842] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:51.166 22:58:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:51.166 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.426 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.427 [2024-09-30 22:58:18.236031] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:51.427 [2024-09-30 22:58:18.236049] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:51.427 [2024-09-30 22:58:18.239390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.427 [2024-09-30 22:58:18.239404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.427 [2024-09-30 22:58:18.239411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.427 [2024-09-30 22:58:18.239417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.427 [2024-09-30 22:58:18.239422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.427 [2024-09-30 22:58:18.239427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.427 [2024-09-30 22:58:18.239433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.427 [2024-09-30 22:58:18.239438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.427 [2024-09-30 22:58:18.239444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6fd0 is same with the state(6) to be set 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.427 [2024-09-30 22:58:18.249406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6fd0 (9): Bad file descriptor 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.427 [2024-09-30 22:58:18.259441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:51.427 [2024-09-30 22:58:18.259653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.427 [2024-09-30 22:58:18.259664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6fd0 with addr=10.0.0.2, port=4420 00:29:51.427 [2024-09-30 22:58:18.259670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6fd0 is same with the state(6) to be set 00:29:51.427 [2024-09-30 22:58:18.259678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6fd0 (9): Bad file descriptor 00:29:51.427 [2024-09-30 22:58:18.259686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:51.427 [2024-09-30 22:58:18.259692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:51.427 [2024-09-30 22:58:18.259698] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:51.427 [2024-09-30 22:58:18.259706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.427 [2024-09-30 22:58:18.269489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:51.427 [2024-09-30 22:58:18.269784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.427 [2024-09-30 22:58:18.269793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6fd0 with addr=10.0.0.2, port=4420 00:29:51.427 [2024-09-30 22:58:18.269798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6fd0 is same with the state(6) to be set 00:29:51.427 [2024-09-30 22:58:18.269806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6fd0 (9): Bad file descriptor 00:29:51.427 [2024-09-30 22:58:18.269813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:51.427 [2024-09-30 22:58:18.269817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:51.427 [2024-09-30 22:58:18.269822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:51.427 [2024-09-30 22:58:18.269830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.427 [2024-09-30 22:58:18.279532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:51.427 [2024-09-30 22:58:18.279827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.427 [2024-09-30 22:58:18.279836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6fd0 with addr=10.0.0.2, port=4420 00:29:51.427 [2024-09-30 22:58:18.279841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6fd0 is same with the state(6) to be set 00:29:51.427 [2024-09-30 22:58:18.279849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6fd0 (9): Bad file descriptor 00:29:51.427 [2024-09-30 22:58:18.279857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:51.427 [2024-09-30 22:58:18.279861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:51.427 [2024-09-30 22:58:18.279866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:51.427 [2024-09-30 22:58:18.279877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.427 [2024-09-30 22:58:18.289577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.427 [2024-09-30 22:58:18.289878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.427 [2024-09-30 22:58:18.289888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6fd0 with addr=10.0.0.2, port=4420 00:29:51.427 [2024-09-30 22:58:18.289897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6fd0 is same with the state(6) to be set 00:29:51.427 [2024-09-30 22:58:18.289905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6fd0 (9): Bad file descriptor 00:29:51.427 [2024-09-30 22:58:18.289913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:51.427 [2024-09-30 22:58:18.289917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:51.427 [2024-09-30 22:58:18.289922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:51.427 [2024-09-30 22:58:18.289930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.427 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.427 [2024-09-30 22:58:18.299624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:51.427 [2024-09-30 22:58:18.299931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.427 [2024-09-30 22:58:18.299951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6fd0 with addr=10.0.0.2, port=4420 00:29:51.428 [2024-09-30 22:58:18.299957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13c6fd0 is same with the state(6) to be set 00:29:51.428 [2024-09-30 22:58:18.299967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6fd0 (9): Bad file descriptor 00:29:51.428 [2024-09-30 22:58:18.299976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:51.428 [2024-09-30 22:58:18.299981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:51.428 [2024-09-30 22:58:18.299986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:51.428 [2024-09-30 22:58:18.299994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.428 [2024-09-30 22:58:18.309670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:51.428 [2024-09-30 22:58:18.310164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.428 [2024-09-30 22:58:18.310195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6fd0 with addr=10.0.0.2, port=4420 00:29:51.428 [2024-09-30 22:58:18.310204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6fd0 is same with the state(6) to be set 00:29:51.428 [2024-09-30 22:58:18.310218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6fd0 (9): Bad file descriptor 00:29:51.428 [2024-09-30 22:58:18.310227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:51.428 [2024-09-30 22:58:18.310232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:51.428 [2024-09-30 22:58:18.310238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:51.428 [2024-09-30 22:58:18.310249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.428 [2024-09-30 22:58:18.319716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:51.428 [2024-09-30 22:58:18.320122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.428 [2024-09-30 22:58:18.320152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6fd0 with addr=10.0.0.2, port=4420 00:29:51.428 [2024-09-30 22:58:18.320161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6fd0 is same with the state(6) to be set 00:29:51.428 [2024-09-30 22:58:18.320174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6fd0 (9): Bad file descriptor 00:29:51.428 [2024-09-30 22:58:18.320183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:51.428 [2024-09-30 22:58:18.320188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:51.428 [2024-09-30 22:58:18.320194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:51.428 [2024-09-30 22:58:18.320205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
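The burst of errno 111 (ECONNREFUSED) connect failures and 'Bad file descriptor' flushes above is the expected fallout of nvmf_subsystem_remove_listener on port 4420: the host's path on 4420 loses its admin queue pair, and bdev_nvme keeps cycling reset/reconnect attempts against a port that no longer has a listener. Recovery comes from the discovery poller rather than from those retries: the next log page no longer lists 4420, so the stale path is dropped and only 4421 survives, which the test verifies with the @63 pipeline (assembled here from the trace):

  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # expected output once the prune lands: 4421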
00:29:51.428 [2024-09-30 22:58:18.323305] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:51.428 [2024-09-30 22:58:18.323319] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.428 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:51.688 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.689 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.071 [2024-09-30 22:58:19.657036] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:53.071 [2024-09-30 22:58:19.657050] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:53.071 [2024-09-30 22:58:19.657059] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:53.071 [2024-09-30 22:58:19.745325] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:53.071 [2024-09-30 22:58:19.852095] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:53.071 [2024-09-30 22:58:19.852121] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.071 request: 00:29:53.071 { 00:29:53.071 "name": "nvme", 00:29:53.071 "trtype": "tcp", 00:29:53.071 "traddr": "10.0.0.2", 00:29:53.071 "adrfam": "ipv4", 00:29:53.071 "trsvcid": "8009", 00:29:53.071 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:53.071 "wait_for_attach": true, 00:29:53.071 "method": "bdev_nvme_start_discovery", 00:29:53.071 "req_id": 1 00:29:53.071 } 00:29:53.071 Got JSON-RPC error response 00:29:53.071 response: 00:29:53.071 { 00:29:53.071 "code": -17, 00:29:53.071 "message": "File exists" 00:29:53.071 } 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.071 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.072 request: 00:29:53.072 { 00:29:53.072 "name": "nvme_second", 00:29:53.072 "trtype": "tcp", 00:29:53.072 "traddr": "10.0.0.2", 00:29:53.072 "adrfam": "ipv4", 00:29:53.072 "trsvcid": "8009", 00:29:53.072 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:53.072 "wait_for_attach": true, 00:29:53.072 "method": "bdev_nvme_start_discovery", 00:29:53.072 "req_id": 1 00:29:53.072 } 00:29:53.072 Got JSON-RPC error response 00:29:53.072 response: 00:29:53.072 { 00:29:53.072 "code": -17, 00:29:53.072 "message": "File exists" 00:29:53.072 } 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:53.072 22:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:53.072 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.333 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.275 [2024-09-30 22:58:21.108119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-09-30 22:58:21.108158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f8ac0 with addr=10.0.0.2, port=8010 00:29:54.275 [2024-09-30 22:58:21.108173] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:54.275 [2024-09-30 22:58:21.108179] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:54.275 [2024-09-30 22:58:21.108185] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:55.216 [2024-09-30 22:58:22.110286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.216 [2024-09-30 22:58:22.110308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f8ac0 with addr=10.0.0.2, port=8010 00:29:55.216 [2024-09-30 22:58:22.110317] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:55.216 [2024-09-30 22:58:22.110323] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:55.216 [2024-09-30 22:58:22.110328] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:56.159 [2024-09-30 22:58:23.112266] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:56.159 request: 00:29:56.159 { 00:29:56.159 "name": "nvme_second", 00:29:56.159 "trtype": "tcp", 00:29:56.159 "traddr": "10.0.0.2", 00:29:56.159 "adrfam": "ipv4", 00:29:56.159 "trsvcid": "8010", 00:29:56.159 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:56.159 "wait_for_attach": false, 00:29:56.159 "attach_timeout_ms": 3000, 00:29:56.159 "method": "bdev_nvme_start_discovery", 00:29:56.159 "req_id": 1 00:29:56.159 } 00:29:56.159 Got JSON-RPC error response 00:29:56.159 response: 00:29:56.159 { 00:29:56.159 "code": -110, 00:29:56.159 "message": "Connection timed out" 00:29:56.159 } 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 831284 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:56.159 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:56.160 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:56.160 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.160 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:56.160 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.160 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:29:56.420 rmmod nvme_tcp 00:29:56.420 rmmod nvme_fabrics 00:29:56.420 rmmod nvme_keyring 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 831061 ']' 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 831061 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 831061 ']' 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 831061 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 831061 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 831061' 00:29:56.420 killing process with pid 831061 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 831061 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 831061 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:56.420 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:29:56.681 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.681 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.681 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.681 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.681 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.594 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:58.594 00:29:58.594 real 0m20.437s 00:29:58.594 user 0m23.376s 00:29:58.594 sys 0m7.348s 00:29:58.594 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.594 
22:58:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.594 ************************************ 00:29:58.594 END TEST nvmf_host_discovery 00:29:58.594 ************************************ 00:29:58.594 22:58:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:58.594 22:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:58.594 22:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:58.594 22:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.594 ************************************ 00:29:58.594 START TEST nvmf_host_multipath_status 00:29:58.594 ************************************ 00:29:58.594 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:58.855 * Looking for test storage... 00:29:58.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:58.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.855 --rc genhtml_branch_coverage=1 00:29:58.855 --rc genhtml_function_coverage=1 00:29:58.855 --rc genhtml_legend=1 00:29:58.855 --rc geninfo_all_blocks=1 00:29:58.855 --rc geninfo_unexecuted_blocks=1 00:29:58.855 00:29:58.855 ' 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:58.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.855 --rc genhtml_branch_coverage=1 00:29:58.855 --rc genhtml_function_coverage=1 00:29:58.855 --rc genhtml_legend=1 00:29:58.855 --rc geninfo_all_blocks=1 00:29:58.855 --rc geninfo_unexecuted_blocks=1 00:29:58.855 00:29:58.855 ' 00:29:58.855 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:58.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.855 --rc genhtml_branch_coverage=1 00:29:58.855 --rc genhtml_function_coverage=1 00:29:58.855 --rc genhtml_legend=1 00:29:58.855 --rc geninfo_all_blocks=1 00:29:58.856 --rc geninfo_unexecuted_blocks=1 00:29:58.856 00:29:58.856 ' 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:58.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.856 --rc genhtml_branch_coverage=1 00:29:58.856 --rc genhtml_function_coverage=1 00:29:58.856 --rc genhtml_legend=1 00:29:58.856 --rc geninfo_all_blocks=1 00:29:58.856 --rc geninfo_unexecuted_blocks=1 00:29:58.856 00:29:58.856 ' 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:58.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:58.856 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.998 22:58:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:06.998 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:06.998 22:58:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:06.998 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:06.998 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:06.999 Found net devices under 0000:31:00.0: cvl_0_0 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:06.999 Found net devices under 0000:31:00.1: cvl_0_1 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:30:06.999 00:30:06.999 --- 10.0.0.2 ping statistics --- 00:30:06.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.999 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:30:06.999 00:30:06.999 --- 10.0.0.1 ping statistics --- 00:30:06.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.999 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=837452 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 837452 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 837452 ']' 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:06.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:06.999 [2024-09-30 22:58:33.660246] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:30:06.999 [2024-09-30 22:58:33.660311] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.999 [2024-09-30 22:58:33.726441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:06.999 [2024-09-30 22:58:33.813801] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.999 [2024-09-30 22:58:33.813862] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.999 [2024-09-30 22:58:33.813868] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.999 [2024-09-30 22:58:33.813873] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.999 [2024-09-30 22:58:33.813877] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.999 [2024-09-30 22:58:33.813979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.999 [2024-09-30 22:58:33.814024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=837452 00:30:06.999 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:07.261 [2024-09-30 22:58:34.135486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.261 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:07.522 Malloc0 00:30:07.522 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:07.783 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:07.783 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.044 [2024-09-30 22:58:34.942952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.044 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:08.306 [2024-09-30 22:58:35.139447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=837717 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 837717 /var/tmp/bdevperf.sock 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 837717 ']' 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:08.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
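With nvmf_tgt (pid 837452) up inside the namespace and listening on /var/tmp/spdk.sock, the provisioning traced above amounts to this RPC sequence (rpc.py path shortened):

    rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u caps in-capsule data at 8 KiB
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2               # -a: allow any host, -r: report ANA states
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two listeners on ports 4420 and 4421 are the two paths whose ANA states the test toggles; bdevperf is started with -z, so it idles on /var/tmp/bdevperf.sock until the perform_tests RPC arrives.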
00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:08.306 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:09.249 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.249 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:09.249 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:09.249 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:09.821 Nvme0n1 00:30:09.822 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:10.395 Nvme0n1 00:30:10.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:10.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:12.310 22:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:12.310 22:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:12.570 22:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:12.570 22:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.956 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:14.217 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.217 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:14.217 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.217 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:14.477 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.477 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:14.478 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.478 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:14.478 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.478 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:14.478 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.478 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:14.738 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.738 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:14.738 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:14.998 22:58:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:15.259 22:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:16.199 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:16.199 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:16.199 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.199 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:16.460 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:16.460 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:16.460 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.460 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:16.460 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.460 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:16.460 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.460 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:16.720 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.720 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:16.720 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.720 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:16.980 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.980 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:16.980 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.980 22:58:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:16.980 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.980 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:16.980 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.980 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:17.240 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.240 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:17.240 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:17.500 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:17.500 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.885 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:19.146 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.146 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:19.146 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.146 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:19.406 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.406 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:19.406 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.406 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:19.667 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.667 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:19.667 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.667 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:19.667 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.667 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:19.667 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:19.927 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:20.187 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:21.128 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:21.128 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:21.128 22:58:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:21.128 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.391 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.391 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:21.391 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.391 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:21.391 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.391 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:21.391 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.391 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:21.651 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.651 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:21.651 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.651 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:21.910 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.910 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:21.910 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.910 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:22.170 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.170 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:22.170 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.170 22:58:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:22.170 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:22.170 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:22.170 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:22.431 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:22.691 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:23.630 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:23.630 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:23.630 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.630 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:23.891 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:23.891 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:23.891 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.891 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:23.891 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:23.891 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:23.891 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.891 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:24.151 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.151 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:24.151 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.151 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:24.412 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.412 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:24.412 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.412 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:24.412 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.412 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:24.412 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.412 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:24.672 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.672 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:24.672 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:24.932 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:25.192 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:26.134 22:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:26.134 22:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:26.134 22:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.134 22:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:26.395 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:26.395 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:26.395 22:58:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.395 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:26.395 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.395 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:26.395 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.395 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:26.655 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.655 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:26.655 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.655 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:26.915 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.915 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:26.915 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.915 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:26.915 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:26.915 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:26.915 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.915 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:27.176 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.176 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:27.444 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:30:27.444 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:27.444 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:27.758 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:28.748 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:28.748 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:28.748 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.748 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:29.009 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.009 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:29.009 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.009 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:29.009 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.009 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:29.009 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.009 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:29.269 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.269 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:29.269 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.269 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:29.530 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.530 22:58:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:29.530 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:29.530 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.790 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.790 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:29.790 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.790 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:29.790 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.790 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:29.790 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:30.051 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:30.312 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:31.254 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:31.254 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:31.254 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.254 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:31.515 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:31.515 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:31.515 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.515 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:31.515 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.515 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:31.515 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.515 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:31.777 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.777 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:31.777 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.777 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:32.038 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:32.038 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:32.038 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.038 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:32.038 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:32.038 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:32.038 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.038 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:32.300 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:32.300 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:32.300 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:32.561 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:32.823 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
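Every check_status round in this stretch of the trace is the same two-helper pattern. Reconstructed from the xtrace output (the real definitions live in test/nvmf/host/multipath_status.sh; this is a condensed sketch, not the verbatim script):

    # bdevperf was wired up earlier with two paths to the same subsystem; the
    # second attach passes -x multipath so both land on a single Nvme0n1 bdev:
    #   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    #       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    #   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    #       -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    set_ANA_state() {   # $1 = ANA state for the 4420 listener, $2 = for 4421
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {     # $1 = trsvcid, $2 = field, $3 = expected value
        [[ $(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }

    # The round just above: with the multipath policy switched to active_active,
    # both non_optimized listeners stay current, connected and accessible:
    set_ANA_state non_optimized non_optimized
    sleep 1
    port_status 4420 current true && port_status 4421 current true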
00:30:33.766 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:33.766 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:33.766 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.766 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.766 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.767 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:34.027 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.027 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:34.027 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.027 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:34.027 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.027 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:34.287 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.287 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:34.287 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.287 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.548 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.548 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:34.548 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.548 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.548 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.548 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:34.548 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.548 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.809 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.809 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:34.809 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:35.071 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:35.071 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.454 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:36.714 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:36.714 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:36.714 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.714 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:36.974 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.974 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:36.974 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.974 22:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 837717 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 837717 ']' 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 837717 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.234 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 837717 00:30:37.498 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:37.498 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:37.498 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 837717' 00:30:37.498 killing process with pid 837717 00:30:37.498 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 837717 00:30:37.498 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 837717 00:30:37.498 { 00:30:37.498 "results": [ 00:30:37.498 { 00:30:37.498 "job": "Nvme0n1", 00:30:37.498 
"core_mask": "0x4", 00:30:37.498 "workload": "verify", 00:30:37.498 "status": "terminated", 00:30:37.498 "verify_range": { 00:30:37.498 "start": 0, 00:30:37.498 "length": 16384 00:30:37.498 }, 00:30:37.498 "queue_depth": 128, 00:30:37.498 "io_size": 4096, 00:30:37.498 "runtime": 26.973196, 00:30:37.498 "iops": 11969.994212031826, 00:30:37.498 "mibps": 46.75778989074932, 00:30:37.498 "io_failed": 0, 00:30:37.498 "io_timeout": 0, 00:30:37.498 "avg_latency_us": 10674.467275995323, 00:30:37.498 "min_latency_us": 283.3066666666667, 00:30:37.498 "max_latency_us": 3019898.88 00:30:37.498 } 00:30:37.498 ], 00:30:37.498 "core_count": 1 00:30:37.498 } 00:30:37.498 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 837717 00:30:37.498 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:37.498 [2024-09-30 22:58:35.217387] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:30:37.498 [2024-09-30 22:58:35.217468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837717 ] 00:30:37.498 [2024-09-30 22:58:35.302834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.498 [2024-09-30 22:58:35.395206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.498 [2024-09-30 22:58:37.074941] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:30:37.498 Running I/O for 90 seconds... 
00:30:37.499 10148.00 IOPS, 39.64 MiB/s 10676.00 IOPS, 41.70 MiB/s 10989.33 IOPS, 42.93 MiB/s 11502.00 IOPS, 44.93 MiB/s 11793.80 IOPS, 46.07 MiB/s 12029.67 IOPS, 46.99 MiB/s 12168.86 IOPS, 47.53 MiB/s 12251.88 IOPS, 47.86 MiB/s 12314.44 IOPS, 48.10 MiB/s 12370.10 IOPS, 48.32 MiB/s 12418.27 IOPS, 48.51 MiB/s [2024-09-30 22:58:49.280532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:37.499 [2024-09-30 22:58:49.280711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:37.499 [2024-09-30 22:58:49.280732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:37.499 [2024-09-30 22:58:49.280749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:37.499 [2024-09-30 22:58:49.280765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:37.499 [2024-09-30 22:58:49.280781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:37.499 [2024-09-30 22:58:49.280796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:37.499 [2024-09-30 22:58:49.280811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:37.499 [2024-09-30 22:58:49.280827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.499 [2024-09-30 22:58:49.280833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
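Every data I/O in this stretch of the dump completes with the same ANA status, and the two long notice runs below are elided for readability. When auditing a dump like this, counting beats reading; a small sketch (bash; the try.txt path comes from the @141 cat above and the patterns from the notice format, so treat it as illustrative rather than output from this run):

log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
# Tally submitted commands by opcode, then count how many completions
# came back ASYMMETRIC ACCESS INACCESSIBLE.
grep -o 'NOTICE.: READ sqid:1\|NOTICE.: WRITE sqid:1' "$log" | sort | uniq -c
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"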
00:30:37.499 [2024-09-30 22:58:49.280849 .. 22:58:49.283158] nvme_qpair.c: (long run of repeated *NOTICE* pairs elided) READ commands sqid:1 lba:14520 through lba:15016 and WRITE commands sqid:1 lba:15024 through lba:15192, len:8, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:007f..0053 p:0 m:0 dnr:0
00:30:37.501 12438.50 IOPS, 48.59 MiB/s 11481.69 IOPS, 44.85 MiB/s 10661.57 IOPS, 41.65 MiB/s 9962.20 IOPS, 38.91 MiB/s 10148.12 IOPS, 39.64 MiB/s 10308.24 IOPS, 40.27 MiB/s 10635.06 IOPS, 41.54 MiB/s 10959.11 IOPS, 42.81 MiB/s 11178.25 IOPS, 43.67 MiB/s 11261.43 IOPS, 43.99 MiB/s 11335.45 IOPS, 44.28 MiB/s 11527.65 IOPS, 45.03 MiB/s 11749.50 IOPS, 45.90 MiB/s
00:30:37.501 [2024-09-30 22:59:02.041931 .. 22:59:02.044538] nvme_qpair.c: (long run of repeated *NOTICE* pairs elided) READ commands sqid:1 lba:122360 through lba:122472 and WRITE commands sqid:1 lba:122488 through lba:122992, len:8, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:005e..0006 p:0 m:0 dnr:0
00:30:37.502 11909.16 IOPS, 46.52 MiB/s 11940.00 IOPS, 46.64 MiB/s Received shutdown signal, test time was about 26.973808 seconds
00:30:37.502
00:30:37.502 Latency(us)
00:30:37.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.502 Job: Nvme0n1 (Core Mask 0x4,
workload: verify, depth: 128, IO size: 4096) 00:30:37.502 Verification LBA range: start 0x0 length 0x4000 00:30:37.502 Nvme0n1 : 26.97 11969.99 46.76 0.00 0.00 10674.47 283.31 3019898.88 00:30:37.502 =================================================================================================================== 00:30:37.502 Total : 11969.99 46.76 0.00 0.00 10674.47 283.31 3019898.88 00:30:37.502 [2024-09-30 22:59:04.297877] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:30:37.502 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.763 rmmod nvme_tcp 00:30:37.763 rmmod nvme_fabrics 00:30:37.763 rmmod nvme_keyring 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 837452 ']' 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 837452 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 837452 ']' 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 837452 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 837452 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 837452' 00:30:37.763 killing process with pid 837452 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 837452 00:30:37.763 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 837452 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.024 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.937 22:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.937 00:30:39.937 real 0m41.357s 00:30:39.937 user 1m46.907s 00:30:39.937 sys 0m11.661s 00:30:39.937 22:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:39.937 22:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:39.937 ************************************ 00:30:39.937 END TEST nvmf_host_multipath_status 00:30:39.937 ************************************ 00:30:40.200 22:59:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:40.200 22:59:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:40.200 22:59:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:40.200 22:59:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.200 ************************************ 00:30:40.200 START TEST nvmf_discovery_remove_ifc 00:30:40.200 ************************************ 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:40.200 * Looking for test storage... 
00:30:40.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.200 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:40.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.462 --rc genhtml_branch_coverage=1 00:30:40.462 --rc genhtml_function_coverage=1 00:30:40.462 --rc genhtml_legend=1 00:30:40.462 --rc geninfo_all_blocks=1 00:30:40.462 --rc geninfo_unexecuted_blocks=1 00:30:40.462 00:30:40.462 ' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:40.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.462 --rc genhtml_branch_coverage=1 00:30:40.462 --rc genhtml_function_coverage=1 00:30:40.462 --rc genhtml_legend=1 00:30:40.462 --rc geninfo_all_blocks=1 00:30:40.462 --rc geninfo_unexecuted_blocks=1 00:30:40.462 00:30:40.462 ' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:40.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.462 --rc genhtml_branch_coverage=1 00:30:40.462 --rc genhtml_function_coverage=1 00:30:40.462 --rc genhtml_legend=1 00:30:40.462 --rc geninfo_all_blocks=1 00:30:40.462 --rc geninfo_unexecuted_blocks=1 00:30:40.462 00:30:40.462 ' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:40.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.462 --rc genhtml_branch_coverage=1 00:30:40.462 --rc genhtml_function_coverage=1 00:30:40.462 --rc genhtml_legend=1 00:30:40.462 --rc geninfo_all_blocks=1 00:30:40.462 --rc geninfo_unexecuted_blocks=1 00:30:40.462 00:30:40.462 ' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.462 
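The lt / cmp_versions calls above decide whether the installed lcov (1.15 here) predates 2.x and therefore needs the extra --rc coverage flags. A compact sketch of that component-wise comparison, splitting on the same ".-:" separators the script uses (function name is illustrative, and pre-release suffixes are out of scope here):

    ver_lt() {
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    ver_lt 1.15 2 && echo "old lcov: add --rc branch/function coverage flags"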
22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:40.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:40.462 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.463 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.604 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:30:48.605 22:59:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:48.605 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:48.605 22:59:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:48.605 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:48.605 Found net devices under 0000:31:00.0: cvl_0_0 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:48.605 Found net devices under 0000:31:00.1: cvl_0_1 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 
-- # (( 2 == 0 )) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:48.605 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:30:48.605 00:30:48.605 --- 10.0.0.2 ping statistics --- 00:30:48.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.605 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:30:48.605 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:30:48.606 00:30:48.606 --- 10.0.0.1 ping statistics --- 00:30:48.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.606 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=847967 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 847967 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 847967 ']' 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
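Everything from gather_supported_nvmf_pci_devs through the two pings above builds the test topology: the two E810 ports on 0000:31:00.x become cvl_0_0 (target side, moved into a network namespace) and cvl_0_1 (initiator side, left in the root namespace). Condensed from the commands in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through
    ping -c 1 10.0.0.2                                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator

Running the target inside its own namespace is what lets this test later sever connectivity with plain "ip addr del" / "ip link set ... down" commands on cvl_0_0 without disturbing the machine's real networking.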
00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:48.606 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.606 [2024-09-30 22:59:15.030872] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:30:48.606 [2024-09-30 22:59:15.030948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.606 [2024-09-30 22:59:15.120088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.606 [2024-09-30 22:59:15.214261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.606 [2024-09-30 22:59:15.214319] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.606 [2024-09-30 22:59:15.214328] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.606 [2024-09-30 22:59:15.214335] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.606 [2024-09-30 22:59:15.214341] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.606 [2024-09-30 22:59:15.214366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.867 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:48.867 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:48.867 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:48.867 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:48.867 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.128 [2024-09-30 22:59:15.899184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.128 [2024-09-30 22:59:15.907459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:49.128 null0 00:30:49.128 [2024-09-30 22:59:15.939405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=848198 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 848198 /tmp/host.sock 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 848198 ']' 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:49.128 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:49.128 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.128 [2024-09-30 22:59:16.016310] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:30:49.128 [2024-09-30 22:59:16.016378] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848198 ] 00:30:49.128 [2024-09-30 22:59:16.100818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.389 [2024-09-30 22:59:16.197776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:49.960 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.345 [2024-09-30 22:59:17.999395] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:51.345 [2024-09-30 22:59:17.999431] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:51.345 [2024-09-30 22:59:17.999447] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:51.345 [2024-09-30 22:59:18.087720] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:51.345 [2024-09-30 22:59:18.273050] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:51.345 [2024-09-30 22:59:18.273112] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:51.345 [2024-09-30 22:59:18.273135] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:51.345 [2024-09-30 22:59:18.273150] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:51.345 [2024-09-30 22:59:18.273172] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.345 [2024-09-30 22:59:18.318680] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12404d0 was disconnected and freed. delete nvme_qpair. 
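The repeated "rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq | sort | xargs" calls that follow are the script's wait_for_bdev loop: poll the host application once a second until its bdev list matches the expected name. A hedged reconstruction under this job's paths (helper names mirror the script, but this is a sketch, not the script itself):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_bdev_list() {
        "$RPC" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {   # block until the host-side bdev list equals $1
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # discovery attached the subsystem's namespace as nvme0n1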
00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:51.345 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:51.606 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:52.546 22:59:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.927 22:59:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:53.927 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:54.867 22:59:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:55.804 22:59:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:56.741 [2024-09-30 22:59:23.713357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:56.741 [2024-09-30 22:59:23.713395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.741 [2024-09-30 22:59:23.713404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.741 [2024-09-30 22:59:23.713413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.741 [2024-09-30 22:59:23.713419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.741 [2024-09-30 22:59:23.713424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.741 [2024-09-30 22:59:23.713430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.741 [2024-09-30 22:59:23.713435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.741 [2024-09-30 22:59:23.713440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.741 [2024-09-30 22:59:23.713446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.741 [2024-09-30 22:59:23.713451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.741 [2024-09-30 22:59:23.713456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121cf00 is same with the state(6) to be set 00:30:56.741 [2024-09-30 22:59:23.723378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121cf00 (9): Bad file descriptor 00:30:56.741 22:59:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:56.741 22:59:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.741 22:59:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:56.741 22:59:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:56.741 22:59:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.741 22:59:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:56.741 22:59:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:56.741 [2024-09-30 22:59:23.733415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:58.120 [2024-09-30 22:59:24.780053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:58.120 [2024-09-30 22:59:24.780144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121cf00 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-09-30 22:59:24.780175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121cf00 is same with the state(6) to be set 00:30:58.120 [2024-09-30 22:59:24.780230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121cf00 (9): Bad file descriptor 00:30:58.120 [2024-09-30 22:59:24.781335] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform 
failover, already in progress. 00:30:58.120 [2024-09-30 22:59:24.781405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:58.120 [2024-09-30 22:59:24.781428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:58.120 [2024-09-30 22:59:24.781451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:58.120 [2024-09-30 22:59:24.781515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.120 [2024-09-30 22:59:24.781541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:58.120 22:59:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.120 22:59:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:58.120 22:59:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:59.062 [2024-09-30 22:59:25.783935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:59.062 [2024-09-30 22:59:25.783951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:59.062 [2024-09-30 22:59:25.783957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:59.062 [2024-09-30 22:59:25.783963] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:59.062 [2024-09-30 22:59:25.783972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
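The errno-110 connect failures and "controller reinitialization failed" retries above are governed by the timeouts the test passed to bdev_nvme_start_discovery at startup: with the target port down, the host retries every second and abandons the controller once the two-second loss window expires. The invocation, copied from earlier in this log with the knobs annotated:

    # --reconnect-delay-sec 1       wait 1 s between reconnect attempts
    # --fast-io-fail-timeout-sec 1  fail queued I/O after 1 s of disconnection
    # --ctrlr-loss-timeout-sec 2    give up and delete the controller after 2 s
    "$RPC" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

Once the loss timeout fires, nvme0n1 disappears and get_bdev_list returns an empty string, which is exactly what the next wait_for_bdev '' poll is watching for.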
00:30:59.062 [2024-09-30 22:59:25.783988] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:59.062 [2024-09-30 22:59:25.784007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.062 [2024-09-30 22:59:25.784014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.062 [2024-09-30 22:59:25.784022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.062 [2024-09-30 22:59:25.784028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.062 [2024-09-30 22:59:25.784033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.062 [2024-09-30 22:59:25.784038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.062 [2024-09-30 22:59:25.784044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.062 [2024-09-30 22:59:25.784049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.062 [2024-09-30 22:59:25.784055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.062 [2024-09-30 22:59:25.784060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.062 [2024-09-30 22:59:25.784068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
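The completion dump above is the admin queue being flushed during controller teardown, not an I/O error: the aborted entries are the controller's parked ASYNC EVENT REQUEST commands (admin opcode 0x0c) plus a KEEP ALIVE (0x18). Reading the "(00/08)" status field (a sketch; the decode table comes from the NVMe base specification, not from SPDK):

    decode_status() {   # $1 = status code type (sct), $2 = status code (sc)
        case "$1/$2" in
            00/00) echo "generic: successful completion" ;;
            00/08) echo "generic: command aborted due to SQ deletion" ;;
            *)     echo "sct=$1 sc=$2 (see NVMe base spec status values)" ;;
        esac
    }
    decode_status 00 08   # what every entry in the dump above reports

The trailing "dnr:0" means do-not-retry is clear, consistent with these being commands swept up by queue deletion rather than genuine failures.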
00:30:59.062 [2024-09-30 22:59:25.784513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120c640 (9): Bad file descriptor 00:30:59.062 [2024-09-30 22:59:25.785524] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:59.062 [2024-09-30 22:59:25.785533] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:59.062 22:59:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.062 22:59:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:59.062 22:59:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.003 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:00.003 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.003 22:59:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:00.003 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.003 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:00.003 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:00.003 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:00.262 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.262 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:00.262 22:59:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.834 [2024-09-30 22:59:27.802998] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:00.834 [2024-09-30 22:59:27.803012] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:00.834 [2024-09-30 22:59:27.803021] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:01.094 [2024-09-30 22:59:27.891285] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:01.094 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:01.094 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.094 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:01.094 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.094 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:01.094 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:01.094 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:01.094 [2024-09-30 22:59:28.077877] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:01.094 [2024-09-30 22:59:28.077914] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:01.094 [2024-09-30 22:59:28.077928] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:01.094 [2024-09-30 22:59:28.077939] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:01.094 [2024-09-30 22:59:28.077945] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:01.094 [2024-09-30 22:59:28.081412] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x121cb20 was disconnected and freed. delete nvme_qpair. 
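The xtrace above is a simple poll: get_bdev_list dumps the bdev names over the host RPC socket (bdev_get_bdevs piped through jq, sort, and xargs), and wait_for_bdev repeats it with a one-second sleep until the expected name (nvme1n1) reappears after the interface is restored. A minimal sketch of that pattern, assuming rpc.py is on PATH and the socket path from this run; the real helper compares the exact list, the substring match here is a simplification:

    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local bdev=$1
        # poll once a second until the bdev shows up in the list
        while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme1n1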
00:31:01.095 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 848198 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 848198 ']' 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 848198 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 848198 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 848198' 00:31:01.356 killing process with pid 848198 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 848198 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 848198 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.356 rmmod nvme_tcp 00:31:01.356 rmmod nvme_fabrics 00:31:01.356 rmmod nvme_keyring 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 847967 ']' 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 847967 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 847967 ']' 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 847967 00:31:01.356 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@955 -- # uname 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 847967 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 847967' 00:31:01.617 killing process with pid 847967 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 847967 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 847967 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.617 22:59:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:04.162 00:31:04.162 real 0m23.620s 00:31:04.162 user 0m27.467s 00:31:04.162 sys 0m7.267s 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.162 ************************************ 00:31:04.162 END TEST nvmf_discovery_remove_ifc 00:31:04.162 ************************************ 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.162 ************************************ 00:31:04.162 
START TEST nvmf_identify_kernel_target 00:31:04.162 ************************************ 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:04.162 * Looking for test storage... 00:31:04.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:04.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.162 --rc genhtml_branch_coverage=1 00:31:04.162 --rc genhtml_function_coverage=1 00:31:04.162 --rc genhtml_legend=1 00:31:04.162 --rc geninfo_all_blocks=1 00:31:04.162 --rc geninfo_unexecuted_blocks=1 00:31:04.162 00:31:04.162 ' 00:31:04.162 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:04.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.162 --rc genhtml_branch_coverage=1 00:31:04.162 --rc genhtml_function_coverage=1 00:31:04.162 --rc genhtml_legend=1 00:31:04.162 --rc geninfo_all_blocks=1 00:31:04.162 --rc geninfo_unexecuted_blocks=1 00:31:04.162 00:31:04.162 ' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:04.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.163 --rc genhtml_branch_coverage=1 00:31:04.163 --rc genhtml_function_coverage=1 00:31:04.163 --rc genhtml_legend=1 00:31:04.163 --rc geninfo_all_blocks=1 00:31:04.163 --rc geninfo_unexecuted_blocks=1 00:31:04.163 00:31:04.163 ' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:04.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.163 --rc genhtml_branch_coverage=1 00:31:04.163 --rc genhtml_function_coverage=1 00:31:04.163 --rc genhtml_legend=1 00:31:04.163 --rc geninfo_all_blocks=1 00:31:04.163 --rc geninfo_unexecuted_blocks=1 00:31:04.163 00:31:04.163 ' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:31:04.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:04.163 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.311 22:59:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:12.311 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # 
echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:12.311 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:12.311 Found net devices under 0000:31:00.0: cvl_0_0 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:12.311 Found net devices under 0000:31:00.1: cvl_0_1 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
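The NIC discovery traced here is plain sysfs walking: for each supported PCI function (both e810 ports, 0x8086:0x159b bound to ice), the helper globs the net/ directory under the device node and records the interface name. A condensed sketch of that loop, assuming the two PCI addresses from this run:

    for pci in 0000:31:00.0 0000:31:00.1; do
        # each entry under net/ is a kernel netdev backed by this PCI function
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
        done
    done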
00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:31:12.311 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:31:12.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:31:12.312 00:31:12.312 --- 10.0.0.2 ping statistics --- 00:31:12.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.312 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:31:12.312 00:31:12.312 --- 10.0.0.1 ping statistics --- 00:31:12.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.312 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:12.312 22:59:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:12.312 22:59:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:15.687 Waiting for block devices as requested 00:31:15.687 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:15.687 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:15.687 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:15.687 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:15.687 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:15.948 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:15.948 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:15.948 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:15.948 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:16.209 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:16.470 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:16.470 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:16.470 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:16.731 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:16.731 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:16.731 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:16.731 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:31:17.302 
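configure_kernel_target drives the in-kernel nvmet target entirely through the configfs paths assigned above; the bare echoes in the trace that follows are writes into those attribute files (xtrace does not show redirection targets, so the attribute names below are the standard nvmet ones, inferred rather than taken from the log). A condensed sketch of the sequence, assuming /dev/nvme0n1 as the backing namespace as in this run:

    modprobe nvmet nvme-tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    # expose the subsystem on the port
    ln -s "$subsys" "$port/subsystems/"

Once the symlink is in place, nvme discover against 10.0.0.1:4420 should report the discovery subsystem plus nqn.2016-06.io.spdk:testnqn, which is exactly what the log shows below.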
22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:17.302 No valid GPT data, bailing 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:17.302 00:31:17.302 Discovery Log Number of Records 2, Generation counter 2 00:31:17.302 =====Discovery Log Entry 0====== 00:31:17.302 trtype: tcp 00:31:17.302 adrfam: ipv4 00:31:17.302 subtype: current discovery subsystem 00:31:17.302 treq: not specified, sq flow control disable supported 00:31:17.302 portid: 1 00:31:17.302 trsvcid: 4420 00:31:17.302 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:17.302 traddr: 10.0.0.1 00:31:17.302 eflags: none 00:31:17.302 sectype: none 00:31:17.302 =====Discovery Log Entry 1====== 00:31:17.302 trtype: tcp 00:31:17.302 adrfam: ipv4 00:31:17.302 subtype: nvme subsystem 00:31:17.302 treq: not specified, sq flow control disable supported 00:31:17.302 portid: 1 00:31:17.302 trsvcid: 4420 00:31:17.302 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:17.302 traddr: 
10.0.0.1 00:31:17.302 eflags: none 00:31:17.302 sectype: none 00:31:17.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:17.302 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:17.564 ===================================================== 00:31:17.564 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:17.564 ===================================================== 00:31:17.564 Controller Capabilities/Features 00:31:17.564 ================================ 00:31:17.564 Vendor ID: 0000 00:31:17.564 Subsystem Vendor ID: 0000 00:31:17.564 Serial Number: 84923ebb2f43800a277f 00:31:17.564 Model Number: Linux 00:31:17.564 Firmware Version: 6.8.9-20 00:31:17.564 Recommended Arb Burst: 0 00:31:17.564 IEEE OUI Identifier: 00 00 00 00:31:17.564 Multi-path I/O 00:31:17.564 May have multiple subsystem ports: No 00:31:17.564 May have multiple controllers: No 00:31:17.564 Associated with SR-IOV VF: No 00:31:17.564 Max Data Transfer Size: Unlimited 00:31:17.564 Max Number of Namespaces: 0 00:31:17.564 Max Number of I/O Queues: 1024 00:31:17.564 NVMe Specification Version (VS): 1.3 00:31:17.564 NVMe Specification Version (Identify): 1.3 00:31:17.564 Maximum Queue Entries: 1024 00:31:17.564 Contiguous Queues Required: No 00:31:17.564 Arbitration Mechanisms Supported 00:31:17.564 Weighted Round Robin: Not Supported 00:31:17.564 Vendor Specific: Not Supported 00:31:17.564 Reset Timeout: 7500 ms 00:31:17.564 Doorbell Stride: 4 bytes 00:31:17.564 NVM Subsystem Reset: Not Supported 00:31:17.564 Command Sets Supported 00:31:17.564 NVM Command Set: Supported 00:31:17.564 Boot Partition: Not Supported 00:31:17.564 Memory Page Size Minimum: 4096 bytes 00:31:17.564 Memory Page Size Maximum: 4096 bytes 00:31:17.564 Persistent Memory Region: Not Supported 00:31:17.564 Optional Asynchronous Events Supported 00:31:17.564 Namespace Attribute Notices: Not Supported 00:31:17.564 Firmware Activation Notices: Not Supported 00:31:17.564 ANA Change Notices: Not Supported 00:31:17.564 PLE Aggregate Log Change Notices: Not Supported 00:31:17.564 LBA Status Info Alert Notices: Not Supported 00:31:17.564 EGE Aggregate Log Change Notices: Not Supported 00:31:17.564 Normal NVM Subsystem Shutdown event: Not Supported 00:31:17.564 Zone Descriptor Change Notices: Not Supported 00:31:17.564 Discovery Log Change Notices: Supported 00:31:17.564 Controller Attributes 00:31:17.564 128-bit Host Identifier: Not Supported 00:31:17.564 Non-Operational Permissive Mode: Not Supported 00:31:17.564 NVM Sets: Not Supported 00:31:17.564 Read Recovery Levels: Not Supported 00:31:17.564 Endurance Groups: Not Supported 00:31:17.564 Predictable Latency Mode: Not Supported 00:31:17.564 Traffic Based Keep ALive: Not Supported 00:31:17.564 Namespace Granularity: Not Supported 00:31:17.564 SQ Associations: Not Supported 00:31:17.564 UUID List: Not Supported 00:31:17.564 Multi-Domain Subsystem: Not Supported 00:31:17.564 Fixed Capacity Management: Not Supported 00:31:17.564 Variable Capacity Management: Not Supported 00:31:17.564 Delete Endurance Group: Not Supported 00:31:17.564 Delete NVM Set: Not Supported 00:31:17.564 Extended LBA Formats Supported: Not Supported 00:31:17.564 Flexible Data Placement Supported: Not Supported 00:31:17.564 00:31:17.564 Controller Memory Buffer Support 00:31:17.564 ================================ 
00:31:17.564 Supported: No 00:31:17.564 00:31:17.564 Persistent Memory Region Support 00:31:17.564 ================================ 00:31:17.564 Supported: No 00:31:17.564 00:31:17.564 Admin Command Set Attributes 00:31:17.564 ============================ 00:31:17.564 Security Send/Receive: Not Supported 00:31:17.564 Format NVM: Not Supported 00:31:17.564 Firmware Activate/Download: Not Supported 00:31:17.564 Namespace Management: Not Supported 00:31:17.564 Device Self-Test: Not Supported 00:31:17.564 Directives: Not Supported 00:31:17.564 NVMe-MI: Not Supported 00:31:17.564 Virtualization Management: Not Supported 00:31:17.564 Doorbell Buffer Config: Not Supported 00:31:17.564 Get LBA Status Capability: Not Supported 00:31:17.564 Command & Feature Lockdown Capability: Not Supported 00:31:17.564 Abort Command Limit: 1 00:31:17.564 Async Event Request Limit: 1 00:31:17.564 Number of Firmware Slots: N/A 00:31:17.564 Firmware Slot 1 Read-Only: N/A 00:31:17.564 Firmware Activation Without Reset: N/A 00:31:17.564 Multiple Update Detection Support: N/A 00:31:17.564 Firmware Update Granularity: No Information Provided 00:31:17.564 Per-Namespace SMART Log: No 00:31:17.564 Asymmetric Namespace Access Log Page: Not Supported 00:31:17.564 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:17.564 Command Effects Log Page: Not Supported 00:31:17.564 Get Log Page Extended Data: Supported 00:31:17.564 Telemetry Log Pages: Not Supported 00:31:17.564 Persistent Event Log Pages: Not Supported 00:31:17.564 Supported Log Pages Log Page: May Support 00:31:17.564 Commands Supported & Effects Log Page: Not Supported 00:31:17.564 Feature Identifiers & Effects Log Page:May Support 00:31:17.564 NVMe-MI Commands & Effects Log Page: May Support 00:31:17.564 Data Area 4 for Telemetry Log: Not Supported 00:31:17.564 Error Log Page Entries Supported: 1 00:31:17.564 Keep Alive: Not Supported 00:31:17.564 00:31:17.564 NVM Command Set Attributes 00:31:17.564 ========================== 00:31:17.564 Submission Queue Entry Size 00:31:17.564 Max: 1 00:31:17.564 Min: 1 00:31:17.564 Completion Queue Entry Size 00:31:17.564 Max: 1 00:31:17.564 Min: 1 00:31:17.564 Number of Namespaces: 0 00:31:17.564 Compare Command: Not Supported 00:31:17.564 Write Uncorrectable Command: Not Supported 00:31:17.564 Dataset Management Command: Not Supported 00:31:17.564 Write Zeroes Command: Not Supported 00:31:17.564 Set Features Save Field: Not Supported 00:31:17.564 Reservations: Not Supported 00:31:17.564 Timestamp: Not Supported 00:31:17.564 Copy: Not Supported 00:31:17.564 Volatile Write Cache: Not Present 00:31:17.564 Atomic Write Unit (Normal): 1 00:31:17.564 Atomic Write Unit (PFail): 1 00:31:17.564 Atomic Compare & Write Unit: 1 00:31:17.564 Fused Compare & Write: Not Supported 00:31:17.564 Scatter-Gather List 00:31:17.564 SGL Command Set: Supported 00:31:17.564 SGL Keyed: Not Supported 00:31:17.564 SGL Bit Bucket Descriptor: Not Supported 00:31:17.564 SGL Metadata Pointer: Not Supported 00:31:17.564 Oversized SGL: Not Supported 00:31:17.564 SGL Metadata Address: Not Supported 00:31:17.564 SGL Offset: Supported 00:31:17.564 Transport SGL Data Block: Not Supported 00:31:17.564 Replay Protected Memory Block: Not Supported 00:31:17.564 00:31:17.564 Firmware Slot Information 00:31:17.564 ========================= 00:31:17.564 Active slot: 0 00:31:17.564 00:31:17.564 00:31:17.564 Error Log 00:31:17.564 ========= 00:31:17.564 00:31:17.564 Active Namespaces 00:31:17.564 ================= 00:31:17.564 Discovery Log Page 00:31:17.564 
================== 00:31:17.564 Generation Counter: 2 00:31:17.564 Number of Records: 2 00:31:17.565 Record Format: 0 00:31:17.565 00:31:17.565 Discovery Log Entry 0 00:31:17.565 ---------------------- 00:31:17.565 Transport Type: 3 (TCP) 00:31:17.565 Address Family: 1 (IPv4) 00:31:17.565 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:17.565 Entry Flags: 00:31:17.565 Duplicate Returned Information: 0 00:31:17.565 Explicit Persistent Connection Support for Discovery: 0 00:31:17.565 Transport Requirements: 00:31:17.565 Secure Channel: Not Specified 00:31:17.565 Port ID: 1 (0x0001) 00:31:17.565 Controller ID: 65535 (0xffff) 00:31:17.565 Admin Max SQ Size: 32 00:31:17.565 Transport Service Identifier: 4420 00:31:17.565 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:17.565 Transport Address: 10.0.0.1 00:31:17.565 Discovery Log Entry 1 00:31:17.565 ---------------------- 00:31:17.565 Transport Type: 3 (TCP) 00:31:17.565 Address Family: 1 (IPv4) 00:31:17.565 Subsystem Type: 2 (NVM Subsystem) 00:31:17.565 Entry Flags: 00:31:17.565 Duplicate Returned Information: 0 00:31:17.565 Explicit Persistent Connection Support for Discovery: 0 00:31:17.565 Transport Requirements: 00:31:17.565 Secure Channel: Not Specified 00:31:17.565 Port ID: 1 (0x0001) 00:31:17.565 Controller ID: 65535 (0xffff) 00:31:17.565 Admin Max SQ Size: 32 00:31:17.565 Transport Service Identifier: 4420 00:31:17.565 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:17.565 Transport Address: 10.0.0.1 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:17.565 get_feature(0x01) failed 00:31:17.565 get_feature(0x02) failed 00:31:17.565 get_feature(0x04) failed 00:31:17.565 ===================================================== 00:31:17.565 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:17.565 ===================================================== 00:31:17.565 Controller Capabilities/Features 00:31:17.565 ================================ 00:31:17.565 Vendor ID: 0000 00:31:17.565 Subsystem Vendor ID: 0000 00:31:17.565 Serial Number: 77c09529f91c297c0664 00:31:17.565 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:17.565 Firmware Version: 6.8.9-20 00:31:17.565 Recommended Arb Burst: 6 00:31:17.565 IEEE OUI Identifier: 00 00 00 00:31:17.565 Multi-path I/O 00:31:17.565 May have multiple subsystem ports: Yes 00:31:17.565 May have multiple controllers: Yes 00:31:17.565 Associated with SR-IOV VF: No 00:31:17.565 Max Data Transfer Size: Unlimited 00:31:17.565 Max Number of Namespaces: 1024 00:31:17.565 Max Number of I/O Queues: 128 00:31:17.565 NVMe Specification Version (VS): 1.3 00:31:17.565 NVMe Specification Version (Identify): 1.3 00:31:17.565 Maximum Queue Entries: 1024 00:31:17.565 Contiguous Queues Required: No 00:31:17.565 Arbitration Mechanisms Supported 00:31:17.565 Weighted Round Robin: Not Supported 00:31:17.565 Vendor Specific: Not Supported 00:31:17.565 Reset Timeout: 7500 ms 00:31:17.565 Doorbell Stride: 4 bytes 00:31:17.565 NVM Subsystem Reset: Not Supported 00:31:17.565 Command Sets Supported 00:31:17.565 NVM Command Set: Supported 00:31:17.565 Boot Partition: Not Supported 00:31:17.565 Memory Page Size Minimum: 4096 bytes 00:31:17.565 Memory Page Size Maximum: 4096 bytes 00:31:17.565 Persistent Memory Region: Not 
Supported 00:31:17.565 Optional Asynchronous Events Supported 00:31:17.565 Namespace Attribute Notices: Supported 00:31:17.565 Firmware Activation Notices: Not Supported 00:31:17.565 ANA Change Notices: Supported 00:31:17.565 PLE Aggregate Log Change Notices: Not Supported 00:31:17.565 LBA Status Info Alert Notices: Not Supported 00:31:17.565 EGE Aggregate Log Change Notices: Not Supported 00:31:17.565 Normal NVM Subsystem Shutdown event: Not Supported 00:31:17.565 Zone Descriptor Change Notices: Not Supported 00:31:17.565 Discovery Log Change Notices: Not Supported 00:31:17.565 Controller Attributes 00:31:17.565 128-bit Host Identifier: Supported 00:31:17.565 Non-Operational Permissive Mode: Not Supported 00:31:17.565 NVM Sets: Not Supported 00:31:17.565 Read Recovery Levels: Not Supported 00:31:17.565 Endurance Groups: Not Supported 00:31:17.565 Predictable Latency Mode: Not Supported 00:31:17.565 Traffic Based Keep ALive: Supported 00:31:17.565 Namespace Granularity: Not Supported 00:31:17.565 SQ Associations: Not Supported 00:31:17.565 UUID List: Not Supported 00:31:17.565 Multi-Domain Subsystem: Not Supported 00:31:17.565 Fixed Capacity Management: Not Supported 00:31:17.565 Variable Capacity Management: Not Supported 00:31:17.565 Delete Endurance Group: Not Supported 00:31:17.565 Delete NVM Set: Not Supported 00:31:17.565 Extended LBA Formats Supported: Not Supported 00:31:17.565 Flexible Data Placement Supported: Not Supported 00:31:17.565 00:31:17.565 Controller Memory Buffer Support 00:31:17.565 ================================ 00:31:17.565 Supported: No 00:31:17.565 00:31:17.565 Persistent Memory Region Support 00:31:17.565 ================================ 00:31:17.565 Supported: No 00:31:17.565 00:31:17.565 Admin Command Set Attributes 00:31:17.565 ============================ 00:31:17.565 Security Send/Receive: Not Supported 00:31:17.565 Format NVM: Not Supported 00:31:17.565 Firmware Activate/Download: Not Supported 00:31:17.565 Namespace Management: Not Supported 00:31:17.565 Device Self-Test: Not Supported 00:31:17.565 Directives: Not Supported 00:31:17.565 NVMe-MI: Not Supported 00:31:17.565 Virtualization Management: Not Supported 00:31:17.565 Doorbell Buffer Config: Not Supported 00:31:17.565 Get LBA Status Capability: Not Supported 00:31:17.565 Command & Feature Lockdown Capability: Not Supported 00:31:17.565 Abort Command Limit: 4 00:31:17.565 Async Event Request Limit: 4 00:31:17.565 Number of Firmware Slots: N/A 00:31:17.565 Firmware Slot 1 Read-Only: N/A 00:31:17.565 Firmware Activation Without Reset: N/A 00:31:17.565 Multiple Update Detection Support: N/A 00:31:17.565 Firmware Update Granularity: No Information Provided 00:31:17.565 Per-Namespace SMART Log: Yes 00:31:17.565 Asymmetric Namespace Access Log Page: Supported 00:31:17.565 ANA Transition Time : 10 sec 00:31:17.565 00:31:17.565 Asymmetric Namespace Access Capabilities 00:31:17.565 ANA Optimized State : Supported 00:31:17.565 ANA Non-Optimized State : Supported 00:31:17.565 ANA Inaccessible State : Supported 00:31:17.565 ANA Persistent Loss State : Supported 00:31:17.565 ANA Change State : Supported 00:31:17.565 ANAGRPID is not changed : No 00:31:17.565 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:17.565 00:31:17.565 ANA Group Identifier Maximum : 128 00:31:17.565 Number of ANA Group Identifiers : 128 00:31:17.565 Max Number of Allowed Namespaces : 1024 00:31:17.565 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:17.565 Command Effects Log Page: Supported 00:31:17.565 Get Log Page Extended Data: 
Supported 00:31:17.565 Telemetry Log Pages: Not Supported 00:31:17.565 Persistent Event Log Pages: Not Supported 00:31:17.565 Supported Log Pages Log Page: May Support 00:31:17.565 Commands Supported & Effects Log Page: Not Supported 00:31:17.565 Feature Identifiers & Effects Log Page:May Support 00:31:17.565 NVMe-MI Commands & Effects Log Page: May Support 00:31:17.565 Data Area 4 for Telemetry Log: Not Supported 00:31:17.565 Error Log Page Entries Supported: 128 00:31:17.565 Keep Alive: Supported 00:31:17.565 Keep Alive Granularity: 1000 ms 00:31:17.565 00:31:17.565 NVM Command Set Attributes 00:31:17.565 ========================== 00:31:17.565 Submission Queue Entry Size 00:31:17.565 Max: 64 00:31:17.565 Min: 64 00:31:17.565 Completion Queue Entry Size 00:31:17.565 Max: 16 00:31:17.565 Min: 16 00:31:17.565 Number of Namespaces: 1024 00:31:17.565 Compare Command: Not Supported 00:31:17.565 Write Uncorrectable Command: Not Supported 00:31:17.565 Dataset Management Command: Supported 00:31:17.565 Write Zeroes Command: Supported 00:31:17.565 Set Features Save Field: Not Supported 00:31:17.565 Reservations: Not Supported 00:31:17.565 Timestamp: Not Supported 00:31:17.565 Copy: Not Supported 00:31:17.565 Volatile Write Cache: Present 00:31:17.565 Atomic Write Unit (Normal): 1 00:31:17.565 Atomic Write Unit (PFail): 1 00:31:17.565 Atomic Compare & Write Unit: 1 00:31:17.565 Fused Compare & Write: Not Supported 00:31:17.565 Scatter-Gather List 00:31:17.565 SGL Command Set: Supported 00:31:17.565 SGL Keyed: Not Supported 00:31:17.565 SGL Bit Bucket Descriptor: Not Supported 00:31:17.565 SGL Metadata Pointer: Not Supported 00:31:17.565 Oversized SGL: Not Supported 00:31:17.565 SGL Metadata Address: Not Supported 00:31:17.565 SGL Offset: Supported 00:31:17.565 Transport SGL Data Block: Not Supported 00:31:17.565 Replay Protected Memory Block: Not Supported 00:31:17.565 00:31:17.565 Firmware Slot Information 00:31:17.565 ========================= 00:31:17.565 Active slot: 0 00:31:17.565 00:31:17.565 Asymmetric Namespace Access 00:31:17.565 =========================== 00:31:17.565 Change Count : 0 00:31:17.565 Number of ANA Group Descriptors : 1 00:31:17.565 ANA Group Descriptor : 0 00:31:17.565 ANA Group ID : 1 00:31:17.565 Number of NSID Values : 1 00:31:17.565 Change Count : 0 00:31:17.565 ANA State : 1 00:31:17.565 Namespace Identifier : 1 00:31:17.565 00:31:17.565 Commands Supported and Effects 00:31:17.565 ============================== 00:31:17.565 Admin Commands 00:31:17.565 -------------- 00:31:17.565 Get Log Page (02h): Supported 00:31:17.565 Identify (06h): Supported 00:31:17.565 Abort (08h): Supported 00:31:17.565 Set Features (09h): Supported 00:31:17.565 Get Features (0Ah): Supported 00:31:17.565 Asynchronous Event Request (0Ch): Supported 00:31:17.565 Keep Alive (18h): Supported 00:31:17.565 I/O Commands 00:31:17.565 ------------ 00:31:17.565 Flush (00h): Supported 00:31:17.565 Write (01h): Supported LBA-Change 00:31:17.565 Read (02h): Supported 00:31:17.565 Write Zeroes (08h): Supported LBA-Change 00:31:17.565 Dataset Management (09h): Supported 00:31:17.565 00:31:17.565 Error Log 00:31:17.565 ========= 00:31:17.565 Entry: 0 00:31:17.565 Error Count: 0x3 00:31:17.565 Submission Queue Id: 0x0 00:31:17.565 Command Id: 0x5 00:31:17.565 Phase Bit: 0 00:31:17.565 Status Code: 0x2 00:31:17.565 Status Code Type: 0x0 00:31:17.565 Do Not Retry: 1 00:31:17.565 Error Location: 0x28 00:31:17.565 LBA: 0x0 00:31:17.565 Namespace: 0x0 00:31:17.565 Vendor Log Page: 0x0 00:31:17.565 ----------- 
00:31:17.565 Entry: 1 00:31:17.565 Error Count: 0x2 00:31:17.565 Submission Queue Id: 0x0 00:31:17.565 Command Id: 0x5 00:31:17.565 Phase Bit: 0 00:31:17.565 Status Code: 0x2 00:31:17.565 Status Code Type: 0x0 00:31:17.565 Do Not Retry: 1 00:31:17.565 Error Location: 0x28 00:31:17.565 LBA: 0x0 00:31:17.565 Namespace: 0x0 00:31:17.565 Vendor Log Page: 0x0 00:31:17.565 ----------- 00:31:17.565 Entry: 2 00:31:17.565 Error Count: 0x1 00:31:17.565 Submission Queue Id: 0x0 00:31:17.565 Command Id: 0x4 00:31:17.565 Phase Bit: 0 00:31:17.565 Status Code: 0x2 00:31:17.565 Status Code Type: 0x0 00:31:17.565 Do Not Retry: 1 00:31:17.565 Error Location: 0x28 00:31:17.565 LBA: 0x0 00:31:17.565 Namespace: 0x0 00:31:17.565 Vendor Log Page: 0x0 00:31:17.565 00:31:17.565 Number of Queues 00:31:17.565 ================ 00:31:17.565 Number of I/O Submission Queues: 128 00:31:17.565 Number of I/O Completion Queues: 128 00:31:17.565 00:31:17.565 ZNS Specific Controller Data 00:31:17.565 ============================ 00:31:17.565 Zone Append Size Limit: 0 00:31:17.565 00:31:17.565 00:31:17.565 Active Namespaces 00:31:17.565 ================= 00:31:17.565 get_feature(0x05) failed 00:31:17.565 Namespace ID:1 00:31:17.565 Command Set Identifier: NVM (00h) 00:31:17.565 Deallocate: Supported 00:31:17.565 Deallocated/Unwritten Error: Not Supported 00:31:17.565 Deallocated Read Value: Unknown 00:31:17.565 Deallocate in Write Zeroes: Not Supported 00:31:17.565 Deallocated Guard Field: 0xFFFF 00:31:17.565 Flush: Supported 00:31:17.565 Reservation: Not Supported 00:31:17.565 Namespace Sharing Capabilities: Multiple Controllers 00:31:17.565 Size (in LBAs): 3750748848 (1788GiB) 00:31:17.565 Capacity (in LBAs): 3750748848 (1788GiB) 00:31:17.565 Utilization (in LBAs): 3750748848 (1788GiB) 00:31:17.565 UUID: 2c24a332-7640-4070-bd85-9cafecf88fa6 00:31:17.565 Thin Provisioning: Not Supported 00:31:17.565 Per-NS Atomic Units: Yes 00:31:17.565 Atomic Write Unit (Normal): 8 00:31:17.565 Atomic Write Unit (PFail): 8 00:31:17.565 Preferred Write Granularity: 8 00:31:17.565 Atomic Compare & Write Unit: 8 00:31:17.565 Atomic Boundary Size (Normal): 0 00:31:17.565 Atomic Boundary Size (PFail): 0 00:31:17.565 Atomic Boundary Offset: 0 00:31:17.565 NGUID/EUI64 Never Reused: No 00:31:17.565 ANA group ID: 1 00:31:17.565 Namespace Write Protected: No 00:31:17.565 Number of LBA Formats: 1 00:31:17.565 Current LBA Format: LBA Format #00 00:31:17.565 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:17.565 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.565 rmmod nvme_tcp 00:31:17.565 rmmod nvme_fabrics 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 
-- # set -e 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.565 22:59:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:31:20.111 22:59:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:23.413 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:80:01.2 (8086 0b00): ioatdma -> 
vfio-pci 00:31:23.413 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:23.413 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:23.745 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:23.745 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:24.094 00:31:24.094 real 0m20.092s 00:31:24.094 user 0m5.483s 00:31:24.094 sys 0m11.566s 00:31:24.094 22:59:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:24.094 22:59:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:24.094 ************************************ 00:31:24.094 END TEST nvmf_identify_kernel_target 00:31:24.094 ************************************ 00:31:24.094 22:59:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:24.094 22:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:24.094 22:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.094 22:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.094 ************************************ 00:31:24.094 START TEST nvmf_auth_host 00:31:24.094 ************************************ 00:31:24.094 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:24.094 * Looking for test storage... 
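For reference, the clean_kernel_target teardown traced just before this END TEST banner boils down to a short configfs walk followed by module removal. A hedged, standalone recap using the paths shown in the trace (the bare `echo 0` at common.sh@710 is assumed to land in the namespace's enable attribute):

# Disable the exported namespace, unlink it from the port, then remove the
# configfs tree bottom-up and unload the kernel NVMe target modules.
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet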
00:31:24.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.094 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:24.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.433 --rc genhtml_branch_coverage=1 00:31:24.433 --rc genhtml_function_coverage=1 00:31:24.433 --rc genhtml_legend=1 00:31:24.433 --rc geninfo_all_blocks=1 00:31:24.433 --rc geninfo_unexecuted_blocks=1 00:31:24.433 00:31:24.433 ' 00:31:24.433 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:24.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.433 --rc genhtml_branch_coverage=1 00:31:24.433 --rc genhtml_function_coverage=1 00:31:24.433 --rc genhtml_legend=1 00:31:24.433 --rc geninfo_all_blocks=1 00:31:24.433 --rc geninfo_unexecuted_blocks=1 00:31:24.434 00:31:24.434 ' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:24.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.434 --rc genhtml_branch_coverage=1 00:31:24.434 --rc genhtml_function_coverage=1 00:31:24.434 --rc genhtml_legend=1 00:31:24.434 --rc geninfo_all_blocks=1 00:31:24.434 --rc geninfo_unexecuted_blocks=1 00:31:24.434 00:31:24.434 ' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:24.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.434 --rc genhtml_branch_coverage=1 00:31:24.434 --rc genhtml_function_coverage=1 00:31:24.434 --rc genhtml_legend=1 00:31:24.434 --rc geninfo_all_blocks=1 00:31:24.434 --rc geninfo_unexecuted_blocks=1 00:31:24.434 00:31:24.434 ' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.434 22:59:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:24.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:24.434 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:32.584 22:59:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:32.584 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:32.585 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:32.585 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.585 22:59:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:32.585 Found net devices under 0000:31:00.0: cvl_0_0 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:32.585 Found net devices under 0000:31:00.1: cvl_0_1 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.585 22:59:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:32.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:31:32.585 00:31:32.585 --- 10.0.0.2 ping statistics --- 00:31:32.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.585 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:31:32.585 00:31:32.585 --- 10.0.0.1 ping statistics --- 00:31:32.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.585 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=862694 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 862694 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 862694 ']' 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
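The nvmf_tcp_init sequence traced above is the entire network fixture for this test: the target-side NIC is moved into a private namespace so the host (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) exchange real NVMe/TCP traffic over the e810 port pair instead of loopback. A condensed recap of those steps, with the interface and namespace names taken from the trace:

# Put the target NIC in its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port on the host-facing interface, then verify
# reachability in both directions before nvmf_tgt is started in the namespace.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Using a namespace over physical ports gives the test two genuinely separate network stacks on one machine, which is why nvmf_tgt is launched under `ip netns exec cvl_0_0_ns_spdk` in the lines that follow.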
00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:32.585 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=342f89df636e15d30837895a4b766b65 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Bpd 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 342f89df636e15d30837895a4b766b65 0 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 342f89df636e15d30837895a4b766b65 0 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=342f89df636e15d30837895a4b766b65 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Bpd 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Bpd 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Bpd 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:32.847 22:59:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:31:32.847 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ae5180fcc5a856735cd52fd9fcbd1daf0a984f4efacae8567910d280a57f9868 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.4QJ 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ae5180fcc5a856735cd52fd9fcbd1daf0a984f4efacae8567910d280a57f9868 3 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ae5180fcc5a856735cd52fd9fcbd1daf0a984f4efacae8567910d280a57f9868 3 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ae5180fcc5a856735cd52fd9fcbd1daf0a984f4efacae8567910d280a57f9868 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:31:32.848 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.4QJ 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.4QJ 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4QJ 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ab396c06a73567207d6a708b0761da9932a1f33c2250374f 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.NaZ 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ab396c06a73567207d6a708b0761da9932a1f33c2250374f 0 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ab396c06a73567207d6a708b0761da9932a1f33c2250374f 0 
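Each gen_dhchap_key call traced in this stretch follows the same recipe: pull random bytes from /dev/urandom as a hex string via xxd, then have a small python step wrap that string in the DHHC-1 secret representation. A minimal sketch of what that wrapping is assumed to do (the standard layout: base64 of the key bytes plus their little-endian CRC-32, tagged with a hash identifier where 00 = null and 01/02/03 = sha256/sha384/sha512; the python body here is an assumption about format_dhchap_key's internals, not a copy of it):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in 'gen_dhchap_key null 48'
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 appended as an integrity check
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())
PY

The resulting string is what gets chmod 0600'd into /tmp/spdk.key-* files and later handed to the target and host as DH-CHAP secrets.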
00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ab396c06a73567207d6a708b0761da9932a1f33c2250374f 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.NaZ 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.NaZ 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NaZ 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=9909671c2dd45f748e9ba102cd840f019d9d8a0dcac2cbe8 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.QNj 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 9909671c2dd45f748e9ba102cd840f019d9d8a0dcac2cbe8 2 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 9909671c2dd45f748e9ba102cd840f019d9d8a0dcac2cbe8 2 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:33.109 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:33.110 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=9909671c2dd45f748e9ba102cd840f019d9d8a0dcac2cbe8 00:31:33.110 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:31:33.110 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.QNj 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.QNj 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.QNj 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:33.110 23:00:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=fe6c1590b77882bc0f41afb948d88975 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.EeM 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key fe6c1590b77882bc0f41afb948d88975 1 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 fe6c1590b77882bc0f41afb948d88975 1 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=fe6c1590b77882bc0f41afb948d88975 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.EeM 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.EeM 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EeM 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ab217457af2dd637dfad45235d125e56 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.wQD 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ab217457af2dd637dfad45235d125e56 1 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ab217457af2dd637dfad45235d125e56 1 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=ab217457af2dd637dfad45235d125e56 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:31:33.110 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.wQD 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.wQD 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wQD 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=1667b160fc522a66fdb53b4c9d300e89e0ff5da97531332c 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.9OE 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 1667b160fc522a66fdb53b4c9d300e89e0ff5da97531332c 2 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 1667b160fc522a66fdb53b4c9d300e89e0ff5da97531332c 2 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=1667b160fc522a66fdb53b4c9d300e89e0ff5da97531332c 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.9OE 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.9OE 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9OE 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:31:33.372 23:00:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=d3cbb09d2e1869ccbff63d3295179d1a 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.8E0 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key d3cbb09d2e1869ccbff63d3295179d1a 0 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 d3cbb09d2e1869ccbff63d3295179d1a 0 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=d3cbb09d2e1869ccbff63d3295179d1a 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.8E0 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.8E0 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8E0 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7b4a24fcc22a6790d5eba1824e48aca2e1866e465b39ae4ec7533dc4f35194fa 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.7i6 00:31:33.372 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7b4a24fcc22a6790d5eba1824e48aca2e1866e465b39ae4ec7533dc4f35194fa 3 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7b4a24fcc22a6790d5eba1824e48aca2e1866e465b39ae4ec7533dc4f35194fa 3 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7b4a24fcc22a6790d5eba1824e48aca2e1866e465b39ae4ec7533dc4f35194fa 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.7i6 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.7i6 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.7i6 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 862694 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 862694 ']' 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:33.373 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Bpd 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4QJ ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4QJ 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NaZ 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.QNj ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.QNj 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EeM 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wQD ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wQD 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9OE 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.633 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8E0 ]] 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8E0 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.7i6 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:33.893 23:00:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:33.893 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:33.894 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:37.194 Waiting for block devices as requested 00:31:37.455 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:37.455 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:37.455 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:37.715 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:37.715 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:37.715 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:37.976 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:37.976 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:37.976 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:38.237 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:38.237 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:38.237 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:38.498 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:38.498 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:38.498 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:38.498 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:38.759 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:39.701 No valid GPT data, bailing 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:39.701 23:00:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:39.701 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:39.701 00:31:39.701 Discovery Log Number of Records 2, Generation counter 2 00:31:39.701 =====Discovery Log Entry 0====== 00:31:39.701 trtype: tcp 00:31:39.701 adrfam: ipv4 00:31:39.702 subtype: current discovery subsystem 00:31:39.702 treq: not specified, sq flow control disable supported 00:31:39.702 portid: 1 00:31:39.702 trsvcid: 4420 00:31:39.702 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:39.702 traddr: 10.0.0.1 00:31:39.702 eflags: none 00:31:39.702 sectype: none 00:31:39.702 =====Discovery Log Entry 1====== 00:31:39.702 trtype: tcp 00:31:39.702 adrfam: ipv4 00:31:39.702 subtype: nvme subsystem 00:31:39.702 treq: not specified, sq flow control disable supported 00:31:39.702 portid: 1 00:31:39.702 trsvcid: 4420 00:31:39.702 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:39.702 traddr: 10.0.0.1 00:31:39.702 eflags: none 00:31:39.702 sectype: none 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.702 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.962 nvme0n1 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
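For readers following the trace: the connect_authenticate helper entered above issues only two RPCs per digest/dhgroup/keyid combination, both visible verbatim in the preceding entries. A minimal standalone sketch of that host-side flow, using the addresses, NQNs, and key names from this log (it assumes key1/ckey1 were registered earlier with keyring_file_add_key, and that rpc_cmd is a wrapper around SPDK's scripts/rpc.py):

  # Restrict the host to the digest / DH-group pair under test.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Attach with DH-HMAC-CHAP: --dhchap-key authenticates the host,
  # --dhchap-ctrlr-key makes the authentication bidirectional.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # On success the namespace appears as nvme0n1 (echoed in the trace below);
  # bdev_nvme_get_controllers and bdev_nvme_detach_controller then verify and tear down.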
00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.962 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.963 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:39.963 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.963 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:39.963 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:39.963 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:39.963 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.963 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.963 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.224 nvme0n1 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.224 23:00:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.224 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.484 nvme0n1 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.484 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.744 nvme0n1 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.744 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.004 nvme0n1 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.004 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.004 nvme0n1 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.265 23:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.265 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.525 nvme0n1 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:41.525 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:41.526 
23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.526 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.786 nvme0n1 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.786 23:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.786 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 nvme0n1 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:42.046 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.047 23:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.047 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.307 nvme0n1 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:42.307 23:00:09 
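
The recurring ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line is what makes keyid 4 unidirectional: the array expands to the extra flag only when a controller key is defined for that keyid, and to nothing when it is empty, as in the [[ -z '' ]] branch above. A standalone illustration of the ${var:+word} idiom with a hypothetical key table (the values below are made up, not taken from the test):

  # ${var:+word} expands to word only if var is set and non-empty, so an
  # optional CLI flag can be built up as a possibly-empty array.
  declare -a ckeys=([0]="secret0" [4]="")
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # prints: keyid=0 extra args: --dhchap-ctrlr-key ckey0
  #         keyid=4 extra args: <none>
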
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.307 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 nvme0n1 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:42.568 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.569 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.830 nvme0n1 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:42.830 23:00:09 
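
On the other side of each round, nvmet_auth_set_key (host/auth.sh@42-51 above) pushes the same material into the kernel nvmet target; the echo calls for 'hmac(sha256)', the dhgroup, the key, and the optional ckey are consistent with writes into nvmet's per-host configfs attributes. A sketch of that presumed target-side shape, using the keyid=1 values from this run (the configfs paths are an assumption about the helper's implementation and are not visible in this excerpt):

  # Presumed target-side half of one round (sketch; paths assumed).
  hostnqn=nqn.2024-02.io.spdk:host0
  host=/sys/kernel/config/nvmet/hosts/$hostnqn
  key='DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==:'
  ckey='DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==:'

  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test
  echo ffdhe4096      > "$host/dhchap_dhgroup"   # DH group under test
  echo "$key"         > "$host/dhchap_key"       # host key
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # bidirectional only
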
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.830 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.090 nvme0n1 00:31:43.090 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:31:43.090 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.090 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.090 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:43.091 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:43.351 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:43.351 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.351 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.351 nvme0n1 00:31:43.351 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.351 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.351 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.351 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.351 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.612 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.874 nvme0n1 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.874 23:00:10 
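
Every successful attach is verified the same way before the next round: the controller list is read back over RPC, the name is pattern-matched against nvme0 (the \n\v\m\e\0 escaping is just xtrace quoting), and the controller is detached so the subsystem can be reconnected with the next key. A compact sketch of that check, again with scripts/rpc.py standing in for rpc_cmd:

  # Confirm the authenticated controller came up, then tear it down
  # ahead of the next (dhgroup, keyid) iteration.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || { echo "authenticated connect failed"; exit 1; }
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
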
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.874 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.135 nvme0n1 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.135 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.709 nvme0n1 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 
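
All of the secrets in this run use the DHHC-1 textual format: a version tag, a two-digit transform selector, a base64 payload (secret plus appended CRC), and a trailing colon; selector 00 means the secret is used as-is, while 01/02/03 indicate an HMAC-SHA-256/384/512 transformation (this interpretation follows the NVMe in-band authentication secret representation and is stated as background, not something shown by the log itself). A tiny sketch that splits one of the keys above into those fields:

  # Split a DHHC-1 secret into its colon-separated fields (sketch).
  key='DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ:'
  IFS=: read -r version transform payload _ <<< "$key"
  echo "version=$version transform=$transform payload=$payload"
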
00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.709 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.710 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.283 nvme0n1 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.283 23:00:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.283 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.543 nvme0n1 00:31:45.543 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.544 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.544 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.544 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.544 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.544 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.805 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.067 nvme0n1 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.067 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.328 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.589 nvme0n1 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.589 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.590 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.851 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:47.425 nvme0n1 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.425 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.997 nvme0n1 00:31:47.997 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.997 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.997 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.997 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.997 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.997 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:48.259 
23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.259 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.832 nvme0n1 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.832 
23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.832 23:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.775 nvme0n1 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.775 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.346 nvme0n1 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.346 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.347 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.608 nvme0n1 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:50.608 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.609 nvme0n1 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.609 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:50.870 23:00:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.870 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.870 nvme0n1 00:31:50.870 23:00:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.871 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.871 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.871 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.871 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.871 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:51.131 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:51.132 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:51.132 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.132 23:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.132 nvme0n1 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.132 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.393 nvme0n1 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.393 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.394 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.655 nvme0n1 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.655 
23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:51.655 23:00:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.655 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:51.656 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:51.656 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:51.656 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.656 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.656 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.916 nvme0n1 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:51.916 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.917 23:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.182 nvme0n1 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.182 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.183 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.446 nvme0n1 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.446 
23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.446 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.708 nvme0n1 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.708 
23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.708 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.968 nvme0n1 00:31:52.968 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.968 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.968 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.968 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.968 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.228 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.228 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.228 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.228 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.228 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.229 23:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.229 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.491 nvme0n1 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.491 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.752 nvme0n1 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:53.752 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.753 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.013 nvme0n1 00:31:54.013 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.013 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.013 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.013 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.013 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.013 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.273 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.273 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.273 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:54.274 23:00:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.274 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.534 nvme0n1 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.534 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.535 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.106 nvme0n1 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.106 23:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.367 nvme0n1 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.367 23:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.367 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.627 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.628 23:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.628 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.888 nvme0n1 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.888 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:56.148 23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.148 
23:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.409 nvme0n1 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.409 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.981 nvme0n1 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.981 23:00:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.981 23:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.553 nvme0n1 00:31:57.553 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.553 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.553 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.553 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.553 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.553 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.814 23:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.387 nvme0n1 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.387 
23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.387 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.328 nvme0n1 00:31:59.328 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.328 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.328 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.328 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.328 23:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:59.328 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.329 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.900 nvme0n1 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.900 23:00:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.900 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:59.901 23:00:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.901 23:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.471 nvme0n1 00:32:00.471 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.471 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.471 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.471 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.471 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.471 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.732 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:00.733 nvme0n1 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.733 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.992 nvme0n1 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.992 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:00.993 
23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.993 23:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:00.993 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.252 nvme0n1 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.252 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.253 
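[Note] The ckey assignment at host/auth.sh@58 relies on bash's ':+' alternate-value expansion: the array receives the two extra words only when ckeys[keyid] is set and non-empty, which is why the keyid 4 attaches in this trace (whose ckey is empty) carry no --dhchap-ctrlr-key and authenticate unidirectionally. A self-contained illustration of the idiom, with placeholder key material:

  ckeys=( [1]='some-controller-key' [4]='' )   # keyid 4 has no controller key
  for keyid in 1 4; do
      # Expands to two words when the entry is non-empty, to nothing otherwise.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
  done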
23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.253 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.513 nvme0n1 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.513 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.773 nvme0n1 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.773 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.033 nvme0n1 00:32:02.033 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.033 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.034 
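[Note] Every secret in this run uses the NVMe-oF DH-HMAC-CHAP interchange format, DHHC-1:<tt>:<base64>:, where the tt field names the transform associated with the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32 trailer. That is why the keyid 0 payload above decodes to 36 bytes (a 32-byte secret plus CRC) while the DHHC-1:03: payload decodes to 68 bytes (64 plus 4). A quick check against the keyid 0 payload from this log:

  # 48 base64 chars -> 36 bytes: 32-byte secret + 4-byte CRC-32 trailer.
  echo 'MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ' | base64 -d | wc -c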
23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:02.034 23:00:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.034 23:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.294 nvme0n1 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:02.294 23:00:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.294 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.554 nvme0n1 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.554 23:00:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.554 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.814 nvme0n1 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.814 
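[Note] get_main_ns_ip (nvmf/common.sh@765-779) resolves the address to dial purely from the transport: an associative array maps each transport to the name of the environment variable holding the address, and an indirect expansion turns that name into its value (NVMF_INITIATOR_IP resolves to 10.0.0.1 here). A reconstruction from the trace; the transport variable name is an assumption, and the emptiness checks mirror the [[ -z ... ]] lines at @771-774:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Bail out if the transport is unset or has no candidate (@771).
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP (@772)
      ip=${!ip}                              # indirect expansion to its value
      [[ -z $ip ]] && return 1               # @774
      echo "$ip"                             # @779: 10.0.0.1
  }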
23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.814 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
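[Note] From here the trace restarts the inner loops with ffdhe4096: host/auth.sh@100-102 show three nested for loops, so every digest is exercised against every DH group and every key ID, with the target re-keyed and the host re-attached each time. The shape of that driver, with array contents inferred from the combinations visible in this excerpt (sha384 and sha512 against ffdhe2048 through ffdhe8192, keyids 0-4):

  for digest in "${digests[@]}"; do           # @100
      for dhgroup in "${dhgroups[@]}"; do     # @101
          for keyid in "${!keys[@]}"; do      # @102; key 4 carries no ckey
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (@103)
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (@104)
          done
      done
  done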
00:32:03.074 nvme0n1 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.074 23:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:03.074 23:00:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.074 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.335 nvme0n1 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.335 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.594 23:00:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.594 23:00:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.594 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.854 nvme0n1 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.854 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.855 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.135 nvme0n1 00:32:04.135 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.135 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.135 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.135 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.135 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.135 23:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.135 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.396 nvme0n1 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.396 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.397 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.656 nvme0n1 00:32:04.656 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.656 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.656 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.656 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.656 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.656 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.917 23:00:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.917 23:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.177 nvme0n1 00:32:05.177 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.177 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.177 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.177 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.177 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.177 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.478 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.478 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.478 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.479 23:00:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.479 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.740 nvme0n1 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:05.740 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.741 23:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.311 nvme0n1 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.311 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.882 nvme0n1 00:32:06.882 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:06.883 23:00:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.883 23:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.149 nvme0n1 00:32:07.149 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.149 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.149 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.149 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.149 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.149 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyZjg5ZGY2MzZlMTVkMzA4Mzc4OTVhNGI3NjZiNjVNwPXJ: 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: ]] 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU1MTgwZmNjNWE4NTY3MzVjZDUyZmQ5ZmNiZDFkYWYwYTk4NGY0ZWZhY2FlODU2NzkxMGQyODBhNTdmOTg2OM3+4Uk=: 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.442 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.082 nvme0n1 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:08.082 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:08.083 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.083 23:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.654 nvme0n1 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.654 23:00:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.654 23:00:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.654 23:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.596 nvme0n1 00:32:09.596 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.596 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.596 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.596 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.596 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.596 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.596 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.596 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTY2N2IxNjBmYzUyMmE2NmZkYjUzYjRjOWQzMDBlODllMGZmNWRhOTc1MzEzMzJjULKwhA==: 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: ]] 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNjYmIwOWQyZTE4NjljY2JmZjYzZDMyOTUxNzlkMWFjdxLn: 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:09.597 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.597 
23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.170 nvme0n1 00:32:10.170 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.170 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.170 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.170 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.170 23:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0YTI0ZmNjMjJhNjc5MGQ1ZWJhMTgyNGU0OGFjYTJlMTg2NmU0NjViMzlhZTRlYzc1MzNkYzRmMzUxOTRmYRTXvtA=: 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.170 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.741 nvme0n1 00:32:10.741 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.741 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.741 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.741 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.741 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.741 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.002 request: 00:32:11.002 { 00:32:11.002 "name": "nvme0", 00:32:11.002 "trtype": "tcp", 00:32:11.002 "traddr": "10.0.0.1", 00:32:11.002 "adrfam": "ipv4", 00:32:11.002 "trsvcid": "4420", 00:32:11.002 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:11.002 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:11.002 "prchk_reftag": false, 00:32:11.002 "prchk_guard": false, 00:32:11.002 "hdgst": false, 00:32:11.002 "ddgst": false, 00:32:11.002 "allow_unrecognized_csi": false, 00:32:11.002 "method": "bdev_nvme_attach_controller", 00:32:11.002 "req_id": 1 00:32:11.002 } 00:32:11.002 Got JSON-RPC error response 00:32:11.002 response: 00:32:11.002 { 00:32:11.002 "code": -5, 00:32:11.002 "message": "Input/output error" 00:32:11.002 } 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 
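
For readers following the trace: the failed attach above is intentional. The test wraps rpc_cmd in a NOT helper so the step passes only when the RPC fails. A minimal sketch of that pattern, assuming SPDK's rpc.py is on PATH and the kernel target already requires DHCHAP for this host; NOT here is a simplified stand-in for the helper defined in common/autotest_common.sh, not its actual body:

```bash
# Simplified stand-in for the NOT helper traced above: succeed only when
# the wrapped command fails, so negative tests read naturally.
NOT() {
    if "$@"; then
        return 1        # the command unexpectedly succeeded
    fi
    return 0            # the command failed, which is what the test wanted
}

# Attaching with no DHCHAP key to a target that requires authentication is
# expected to fail with JSON-RPC error -5 (Input/output error), as above.
NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

# The failed attach must not leave a controller behind.
[[ $(rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]
```

The jq length check mirrors the host/auth.sh@114 step in the trace: after a rejected handshake, bdev_nvme_get_controllers must report an empty list.
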
00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.002 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.002 request: 00:32:11.002 { 00:32:11.002 "name": "nvme0", 00:32:11.002 "trtype": "tcp", 00:32:11.002 "traddr": "10.0.0.1", 00:32:11.002 "adrfam": "ipv4", 00:32:11.002 "trsvcid": "4420", 00:32:11.002 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:11.002 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:11.002 "prchk_reftag": false, 00:32:11.002 "prchk_guard": false, 00:32:11.002 "hdgst": false, 00:32:11.002 "ddgst": false, 00:32:11.002 "dhchap_key": "key2", 00:32:11.002 "allow_unrecognized_csi": false, 00:32:11.002 "method": "bdev_nvme_attach_controller", 00:32:11.002 "req_id": 1 00:32:11.002 } 00:32:11.003 Got JSON-RPC error response 00:32:11.003 response: 00:32:11.003 { 00:32:11.003 "code": -5, 00:32:11.003 "message": "Input/output error" 00:32:11.003 } 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.003 23:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
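
The key1/ckey2 mismatch attempted next fails because the authentication material lives on both sides: the host passes keys through the attach RPC, while the target side was primed by the nvmet_auth_set_key steps traced earlier, whose echo commands land in the kernel target's configfs. A rough sketch of that effect is below; the configfs attribute names are assumptions based on the kernel nvmet interface, since the trace shows only the echoed values, not the redirection targets:

```bash
# Rough sketch of what nvmet_auth_set_key configures on the kernel target.
# Attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key)
# are assumed from the nvmet configfs layout, not shown in this trace.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # digest    (host/auth.sh@48)
echo 'ffdhe2048'    > "$host_dir/dhchap_dhgroup"   # DH group  (host/auth.sh@49)
echo "$key"         > "$host_dir/dhchap_key"       # host secret (host/auth.sh@50)
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # bidirectional (@51)
```

With the target holding keyid 1's material, an attach that presents key2, or key1 paired with ckey2, fails the DH-HMAC-CHAP exchange, which surfaces as the -5 Input/output errors in the JSON-RPC responses around this point in the log.
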
00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.263 request: 00:32:11.263 { 00:32:11.263 "name": "nvme0", 00:32:11.263 "trtype": "tcp", 00:32:11.263 "traddr": "10.0.0.1", 00:32:11.263 "adrfam": "ipv4", 00:32:11.263 "trsvcid": "4420", 00:32:11.263 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:11.263 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:11.263 "prchk_reftag": false, 00:32:11.263 "prchk_guard": false, 00:32:11.263 "hdgst": false, 00:32:11.263 "ddgst": false, 00:32:11.263 "dhchap_key": "key1", 00:32:11.263 "dhchap_ctrlr_key": "ckey2", 00:32:11.263 "allow_unrecognized_csi": false, 00:32:11.263 "method": "bdev_nvme_attach_controller", 00:32:11.263 "req_id": 1 00:32:11.263 } 00:32:11.263 Got JSON-RPC error response 00:32:11.263 response: 00:32:11.263 { 00:32:11.263 "code": -5, 00:32:11.263 "message": "Input/output 
error" 00:32:11.263 } 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.263 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.264 nvme0n1 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.264 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.525 request: 00:32:11.525 { 00:32:11.525 "name": "nvme0", 00:32:11.525 "dhchap_key": "key1", 00:32:11.525 "dhchap_ctrlr_key": "ckey2", 00:32:11.525 "method": "bdev_nvme_set_keys", 00:32:11.525 "req_id": 1 00:32:11.525 } 00:32:11.525 Got JSON-RPC error response 00:32:11.525 response: 00:32:11.525 { 00:32:11.525 "code": -13, 00:32:11.525 "message": "Permission denied" 00:32:11.525 } 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:11.525 23:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:12.908 23:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.908 23:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:12.908 23:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.908 23:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.908 23:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.908 23:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:12.908 23:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIzOTZjMDZhNzM1NjcyMDdkNmE3MDhiMDc2MWRhOTkzMmExZjMzYzIyNTAzNzRmSo4QIg==: 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: ]] 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTkwOTY3MWMyZGQ0NWY3NDhlOWJhMTAyY2Q4NDBmMDE5ZDlkOGEwZGNhYzJjYmU4W+j2oA==: 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.849 nvme0n1 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmU2YzE1OTBiNzc4ODJiYzBmNDFhZmI5NDhkODg5NzUSW6xJ: 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: ]] 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWIyMTc0NTdhZjJkZDYzN2RmYWQ0NTIzNWQxMjVlNTatDvOt: 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.849 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.850 request: 00:32:13.850 { 00:32:13.850 "name": "nvme0", 00:32:13.850 "dhchap_key": "key2", 00:32:13.850 "dhchap_ctrlr_key": "ckey1", 00:32:13.850 "method": "bdev_nvme_set_keys", 00:32:13.850 "req_id": 1 00:32:13.850 } 00:32:13.850 Got JSON-RPC error response 00:32:13.850 response: 00:32:13.850 { 00:32:13.850 "code": -13, 00:32:13.850 "message": "Permission denied" 00:32:13.850 } 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.850 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.110 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:14.110 23:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:15.051 23:00:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.051 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.052 rmmod nvme_tcp 00:32:15.052 rmmod nvme_fabrics 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 862694 ']' 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 862694 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 862694 ']' 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 862694 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.052 23:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 862694 00:32:15.052 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:15.052 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:15.052 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 862694' 00:32:15.052 killing process with pid 862694 00:32:15.052 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 862694 00:32:15.052 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 862694 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:15.312 23:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.225 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:32:17.486 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:32:17.487 23:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:21.692 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:21.692 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:21.692 23:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Bpd /tmp/spdk.key-null.NaZ /tmp/spdk.key-sha256.EeM /tmp/spdk.key-sha384.9OE /tmp/spdk.key-sha512.7i6 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:21.692 23:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:24.993 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:32:24.993 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:24.993 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:24.993 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:25.255 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:25.255 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:25.255 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:25.255 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:25.516 00:32:25.516 real 1m1.489s 00:32:25.516 user 0m55.144s 00:32:25.516 sys 0m16.410s 00:32:25.516 23:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:25.516 23:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.516 ************************************ 00:32:25.516 END TEST nvmf_auth_host 00:32:25.516 ************************************ 00:32:25.516 23:00:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:25.516 23:00:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:25.516 23:00:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:25.516 23:00:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:25.516 23:00:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.516 ************************************ 00:32:25.516 START TEST nvmf_digest 00:32:25.516 ************************************ 00:32:25.516 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:25.779 * Looking for test storage... 
00:32:25.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:25.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.779 --rc genhtml_branch_coverage=1 00:32:25.779 --rc genhtml_function_coverage=1 00:32:25.779 --rc genhtml_legend=1 00:32:25.779 --rc geninfo_all_blocks=1 00:32:25.779 --rc geninfo_unexecuted_blocks=1 00:32:25.779 00:32:25.779 ' 00:32:25.779 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:25.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.779 --rc genhtml_branch_coverage=1 00:32:25.779 --rc genhtml_function_coverage=1 00:32:25.779 --rc genhtml_legend=1 00:32:25.779 --rc geninfo_all_blocks=1 00:32:25.780 --rc geninfo_unexecuted_blocks=1 00:32:25.780 00:32:25.780 ' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:25.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.780 --rc genhtml_branch_coverage=1 00:32:25.780 --rc genhtml_function_coverage=1 00:32:25.780 --rc genhtml_legend=1 00:32:25.780 --rc geninfo_all_blocks=1 00:32:25.780 --rc geninfo_unexecuted_blocks=1 00:32:25.780 00:32:25.780 ' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:25.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.780 --rc genhtml_branch_coverage=1 00:32:25.780 --rc genhtml_function_coverage=1 00:32:25.780 --rc genhtml_legend=1 00:32:25.780 --rc geninfo_all_blocks=1 00:32:25.780 --rc geninfo_unexecuted_blocks=1 00:32:25.780 00:32:25.780 ' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.780 
23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:25.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:25.780 23:00:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.780 23:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:33.924 23:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.924 23:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.924 23:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.924 23:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.924 23:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.924 23:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.924 23:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.924 
23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:33.924 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:33.924 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:33.924 Found net devices under 0000:31:00.0: cvl_0_0 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.924 
23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:33.924 Found net devices under 0000:31:00.1: cvl_0_1 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:32:33.924 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:33.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:33.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms
00:32:33.925
00:32:33.925 --- 10.0.0.2 ping statistics ---
00:32:33.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:33.925 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:33.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:33.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms
00:32:33.925
00:32:33.925 --- 10.0.0.1 ping statistics ---
00:32:33.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:33.925 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:32:33.925 ************************************
00:32:33.925 START TEST nvmf_digest_clean
00:32:33.925 ************************************
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=880688 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 880688 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 880688 ']' 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:33.925 23:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:33.925 [2024-09-30 23:01:00.505953] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:32:33.925 [2024-09-30 23:01:00.506040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.925 [2024-09-30 23:01:00.595760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.925 [2024-09-30 23:01:00.689678] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.925 [2024-09-30 23:01:00.689740] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.925 [2024-09-30 23:01:00.689748] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.925 [2024-09-30 23:01:00.689755] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.925 [2024-09-30 23:01:00.689761] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
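The nvmfappstart sequence traced above reduces to a small pattern: launch nvmf_tgt inside the target network namespace with --wait-for-rpc, remember its pid, and block until the RPC socket answers; the startup notices just above come from that paused target. A minimal bash sketch of the pattern, using the paths and arguments from this run (the real waitforlisten helper in autotest_common.sh is more elaborate; rpc_get_methods is simply one cheap RPC to probe with):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the target paused inside the namespace, exactly as in the trace.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!                        # 880688 in this run
    # Poll until /var/tmp/spdk.sock accepts RPCs (waitforlisten's job).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1  # bail out if the target died during startup
        sleep 0.1
    done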
00:32:33.925 [2024-09-30 23:01:00.689789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:34.499 null0 00:32:34.499 [2024-09-30 23:01:01.443727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.499 [2024-09-30 23:01:01.468057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=880730 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 880730 /var/tmp/bperf.sock 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 880730 ']' 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:34.499 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:32:34.500 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:34.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:34.500 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:34.500 23:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:34.762 [2024-09-30 23:01:01.527521] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:32:34.762 [2024-09-30 23:01:01.527586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880730 ] 00:32:34.762 [2024-09-30 23:01:01.609521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.762 [2024-09-30 23:01:01.705954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.334 23:01:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.334 23:01:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:35.334 23:01:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:35.334 23:01:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:35.334 23:01:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:35.595 23:01:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.595 23:01:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:36.166 nvme0n1 00:32:36.166 23:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:36.166 23:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:36.166 Running I/O for 2 seconds... 
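Each run_bperf pass drives bdevperf over its own RPC socket, and the trace above shows the whole control flow: start bdevperf paused, initialize its framework, attach an NVMe/TCP controller with data digest enabled, then launch the workload. A sketch condensed from the exact commands in the trace (--ddgst is what makes every data PDU carry a crc32c digest, which is the work the accel framework is seen counting below):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # bdevperf idles (-z) and waits for RPC configuration before doing any I/O.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the harness waits for /var/tmp/bperf.sock to come up here, as above)
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    # Attach the target configured earlier; --ddgst enables the NVMe/TCP data digest.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # perform_tests kicks off the 2-second run whose results follow.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests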
00:32:38.494 18944.00 IOPS, 74.00 MiB/s 19700.50 IOPS, 76.96 MiB/s
00:32:38.494 Latency(us)
00:32:38.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:38.494 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:38.494 nvme0n1 : 2.00 19729.85 77.07 0.00 0.00 6481.39 2375.68 23811.41
00:32:38.494 ===================================================================================================================
00:32:38.494 Total : 19729.85 77.07 0.00 0.00 6481.39 2375.68 23811.41
00:32:38.494 {
00:32:38.494 "results": [
00:32:38.494 {
00:32:38.494 "job": "nvme0n1",
00:32:38.494 "core_mask": "0x2",
00:32:38.494 "workload": "randread",
00:32:38.494 "status": "finished",
00:32:38.494 "queue_depth": 128,
00:32:38.494 "io_size": 4096,
00:32:38.494 "runtime": 2.003512,
00:32:38.494 "iops": 19729.85437571624,
00:32:38.494 "mibps": 77.06974365514156,
00:32:38.494 "io_failed": 0,
00:32:38.494 "io_timeout": 0,
00:32:38.494 "avg_latency_us": 6481.385061094386,
00:32:38.494 "min_latency_us": 2375.68,
00:32:38.494 "max_latency_us": 23811.413333333334
00:32:38.494 }
00:32:38.494 ],
00:32:38.494 "core_count": 1
00:32:38.494 }
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:38.494 | select(.opcode=="crc32c")
00:32:38.494 | "\(.module_name) \(.executed)"'
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 880730
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 880730 ']'
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 880730
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 880730
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@968 -- # echo 'killing process with pid 880730' 00:32:38.494 killing process with pid 880730 00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 880730 00:32:38.494 Received shutdown signal, test time was about 2.000000 seconds 00:32:38.494 00:32:38.494 Latency(us) 00:32:38.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.494 =================================================================================================================== 00:32:38.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:38.494 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 880730 00:32:38.755 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:38.755 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:38.755 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:38.755 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:38.755 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=881576 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 881576 /var/tmp/bperf.sock 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 881576 ']' 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:38.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:38.756 23:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:38.756 [2024-09-30 23:01:05.561053] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:32:38.756 [2024-09-30 23:01:05.561113] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881576 ] 00:32:38.756 I/O size of 131072 is greater than zero copy threshold (65536). 
00:32:38.756 Zero copy mechanism will not be used. 00:32:38.756 [2024-09-30 23:01:05.636331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.756 [2024-09-30 23:01:05.689697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.698 23:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.698 23:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:39.698 23:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:39.698 23:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:39.698 23:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:39.698 23:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:39.698 23:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:40.269 nvme0n1 00:32:40.269 23:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:40.269 23:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:40.269 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:40.269 Zero copy mechanism will not be used. 00:32:40.269 Running I/O for 2 seconds... 
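This second pass reads 131072-byte blocks at queue depth 16, which is why bdevperf notes twice that the I/O size exceeds its 65536-byte zero-copy threshold and falls back to copied buffers. The MiB/s column in the results below is simply IOPS times the I/O size: 131072 bytes is 1/8 MiB, so MiB/s = IOPS / 8. Checking against the totals that follow:

    $ echo '3689.79 * 131072 / 1048576' | bc -l
    461.22375000000000000000

which matches the 461.22 MiB/s reported for 3689.79 IOPS.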
00:32:42.149 4173.00 IOPS, 521.62 MiB/s 3689.50 IOPS, 461.19 MiB/s
00:32:42.149 Latency(us)
00:32:42.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:42.149 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:42.149 nvme0n1 : 2.00 3689.79 461.22 0.00 0.00 4333.35 638.29 8847.36
00:32:42.149 ===================================================================================================================
00:32:42.149 Total : 3689.79 461.22 0.00 0.00 4333.35 638.29 8847.36
00:32:42.149 {
00:32:42.149 "results": [
00:32:42.149 {
00:32:42.149 "job": "nvme0n1",
00:32:42.149 "core_mask": "0x2",
00:32:42.149 "workload": "randread",
00:32:42.149 "status": "finished",
00:32:42.149 "queue_depth": 16,
00:32:42.149 "io_size": 131072,
00:32:42.149 "runtime": 2.004179,
00:32:42.149 "iops": 3689.790183411761,
00:32:42.149 "mibps": 461.22377292647013,
00:32:42.149 "io_failed": 0,
00:32:42.149 "io_timeout": 0,
00:32:42.149 "avg_latency_us": 4333.348983547442,
00:32:42.149 "min_latency_us": 638.2933333333333,
00:32:42.149 "max_latency_us": 8847.36
00:32:42.149 }
00:32:42.149 ],
00:32:42.149 "core_count": 1
00:32:42.149 }
00:32:42.149 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:32:42.149 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:32:42.149 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:32:42.149 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:42.149 | select(.opcode=="crc32c")
00:32:42.149 | "\(.module_name) \(.executed)"'
00:32:42.149 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 881576
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 881576 ']'
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 881576
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 881576
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968
-- # echo 'killing process with pid 881576' 00:32:42.410 killing process with pid 881576 00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 881576 00:32:42.410 Received shutdown signal, test time was about 2.000000 seconds 00:32:42.410 00:32:42.410 Latency(us) 00:32:42.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.410 =================================================================================================================== 00:32:42.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:42.410 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 881576 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=882406 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 882406 /var/tmp/bperf.sock 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 882406 ']' 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:42.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:42.671 23:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:42.671 [2024-09-30 23:01:09.549125] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:32:42.671 [2024-09-30 23:01:09.549186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882406 ] 00:32:42.671 [2024-09-30 23:01:09.624496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.671 [2024-09-30 23:01:09.677781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.611 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:43.611 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:43.611 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:43.611 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:43.611 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:43.611 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:43.611 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:44.184 nvme0n1 00:32:44.184 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:44.184 23:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:44.184 Running I/O for 2 seconds... 
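Once framework_start_init completes, the script attaches an NVMe-oF controller over TCP with data digest enabled and then drives the workload through bdevperf's Python helper. Both commands are taken verbatim from the traces above; only the shortened $ROOT prefix is ours:

    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # --ddgst enables the NVMe/TCP data digest (CRC32C) on this connection,
    # which is what generates the crc32c accel operations counted after each run
    $ROOT/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # runs the configured 2-second job and returns the JSON results seen below
    $ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests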
00:32:46.068 29578.00 IOPS, 115.54 MiB/s 29613.00 IOPS, 115.68 MiB/s 00:32:46.069 Latency(us) 00:32:46.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.069 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.069 nvme0n1 : 2.01 29616.07 115.69 0.00 0.00 4314.77 1993.39 7755.09 00:32:46.069 =================================================================================================================== 00:32:46.069 Total : 29616.07 115.69 0.00 0.00 4314.77 1993.39 7755.09 00:32:46.069 { 00:32:46.069 "results": [ 00:32:46.069 { 00:32:46.069 "job": "nvme0n1", 00:32:46.069 "core_mask": "0x2", 00:32:46.069 "workload": "randwrite", 00:32:46.069 "status": "finished", 00:32:46.069 "queue_depth": 128, 00:32:46.069 "io_size": 4096, 00:32:46.069 "runtime": 2.005465, 00:32:46.069 "iops": 29616.074077583005, 00:32:46.069 "mibps": 115.68778936555861, 00:32:46.069 "io_failed": 0, 00:32:46.069 "io_timeout": 0, 00:32:46.069 "avg_latency_us": 4314.771943293936, 00:32:46.069 "min_latency_us": 1993.3866666666668, 00:32:46.069 "max_latency_us": 7755.093333333333 00:32:46.069 } 00:32:46.069 ], 00:32:46.069 "core_count": 1 00:32:46.069 } 00:32:46.069 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:46.069 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:46.069 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:46.069 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:46.069 | select(.opcode=="crc32c") 00:32:46.069 | "\(.module_name) \(.executed)"' 00:32:46.069 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 882406 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 882406 ']' 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 882406 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 882406 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 882406' 00:32:46.331 killing process with pid 882406 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 882406 00:32:46.331 Received shutdown signal, test time was about 2.000000 seconds 00:32:46.331 00:32:46.331 Latency(us) 00:32:46.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.331 =================================================================================================================== 00:32:46.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:46.331 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 882406 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=883096 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 883096 /var/tmp/bperf.sock 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 883096 ']' 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:46.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:46.592 23:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:46.592 [2024-09-30 23:01:13.501475] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:32:46.592 [2024-09-30 23:01:13.501533] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883096 ] 00:32:46.592 I/O size of 131072 is greater than zero copy threshold (65536). 
00:32:46.592 Zero copy mechanism will not be used. 00:32:46.592 [2024-09-30 23:01:13.576493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.853 [2024-09-30 23:01:13.629059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.424 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.424 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:47.424 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:47.424 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:47.424 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:47.685 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:47.685 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:47.945 nvme0n1 00:32:47.945 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:47.945 23:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:48.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:48.206 Zero copy mechanism will not be used. 00:32:48.206 Running I/O for 2 seconds... 
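The zero-copy notice fires because this run uses 131072-byte I/Os, above the 65536-byte zero-copy threshold, so payloads are copied into the TCP stream. After each 2-second run the script also verifies that digests were really computed in software: it pulls accel statistics over the same RPC socket and checks the crc32c counters, as the get_accel_stats traces above and below show. A condensed sketch of that check, reusing the exact jq filter from the trace:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    read -r acc_module acc_executed < <($RPC accel_get_stats | jq -rc '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))          # some crc32c operations actually ran
    [[ $acc_module == software ]]   # on the software module, since scan_dsa=false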
00:32:50.089 5895.00 IOPS, 736.88 MiB/s 6082.00 IOPS, 760.25 MiB/s 00:32:50.089 Latency(us) 00:32:50.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.089 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:50.089 nvme0n1 : 2.00 6083.67 760.46 0.00 0.00 2626.48 1092.27 8738.13 00:32:50.089 =================================================================================================================== 00:32:50.089 Total : 6083.67 760.46 0.00 0.00 2626.48 1092.27 8738.13 00:32:50.089 { 00:32:50.089 "results": [ 00:32:50.089 { 00:32:50.089 "job": "nvme0n1", 00:32:50.089 "core_mask": "0x2", 00:32:50.089 "workload": "randwrite", 00:32:50.089 "status": "finished", 00:32:50.089 "queue_depth": 16, 00:32:50.089 "io_size": 131072, 00:32:50.089 "runtime": 2.002738, 00:32:50.089 "iops": 6083.671453779775, 00:32:50.089 "mibps": 760.4589317224719, 00:32:50.089 "io_failed": 0, 00:32:50.089 "io_timeout": 0, 00:32:50.089 "avg_latency_us": 2626.483239220836, 00:32:50.089 "min_latency_us": 1092.2666666666667, 00:32:50.089 "max_latency_us": 8738.133333333333 00:32:50.089 } 00:32:50.089 ], 00:32:50.089 "core_count": 1 00:32:50.089 } 00:32:50.089 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:50.089 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:50.089 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:50.089 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:50.089 | select(.opcode=="crc32c") 00:32:50.089 | "\(.module_name) \(.executed)"' 00:32:50.089 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 883096 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 883096 ']' 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 883096 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 883096 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 883096' 00:32:50.349 killing process with pid 883096 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 883096 00:32:50.349 Received shutdown signal, test time was about 2.000000 seconds 00:32:50.349 00:32:50.349 Latency(us) 00:32:50.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.349 =================================================================================================================== 00:32:50.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:50.349 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 883096 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 880688 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 880688 ']' 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 880688 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 880688 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 880688' 00:32:50.609 killing process with pid 880688 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 880688 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 880688 00:32:50.609 00:32:50.609 real 0m17.170s 00:32:50.609 user 0m33.861s 00:32:50.609 sys 0m3.800s 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:50.609 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.609 ************************************ 00:32:50.609 END TEST nvmf_digest_clean 00:32:50.609 ************************************ 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:50.870 ************************************ 00:32:50.870 START TEST nvmf_digest_error 00:32:50.870 ************************************ 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:50.870 23:01:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=883911 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 883911 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 883911 ']' 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.870 23:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:50.870 [2024-09-30 23:01:17.740900] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:32:50.870 [2024-09-30 23:01:17.740961] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.870 [2024-09-30 23:01:17.831573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.131 [2024-09-30 23:01:17.902337] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.131 [2024-09-30 23:01:17.902381] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.131 [2024-09-30 23:01:17.902387] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.131 [2024-09-30 23:01:17.902392] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.131 [2024-09-30 23:01:17.902397] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
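For the error-path test the nvmf target itself is started with --wait-for-rpc (visible in the nvmf_tgt command line above). That matters because accel opcode routing has to be changed before the accel framework initializes, and the next traces show the script doing exactly that. A sketch of the step, assuming the default /var/tmp/spdk.sock target socket that rpc_cmd uses:

    # while the target is still paused, route crc32c to the error-injection module
    rpc.py accel_assign_opc -o crc32c -m error
    # initialization then proceeds; the NOTICE below ("Operation crc32c will be
    # assigned to module error") confirms the assignment took effect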
00:32:51.131 [2024-09-30 23:01:17.902419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:51.703 [2024-09-30 23:01:18.596348] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:51.703 null0 00:32:51.703 [2024-09-30 23:01:18.674787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.703 [2024-09-30 23:01:18.698996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=884155 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 884155 /var/tmp/bperf.sock 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 884155 ']' 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
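The traces that follow configure both sides for the digest-error run. On the host, bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 makes bdevperf keep per-error-code statistics and retry failed I/O indefinitely; on the target, accel_error_inject_error first disables any leftover injection and is then re-armed with -t corrupt to hand back corrupted crc32c results. Each corrupted digest surfaces on the host as one of the "data digest error on tqpair" records below, and the transport completes the command with COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the bdev layer retries it. A condensed sketch of the two-socket setup, paraphrasing the traces (the exact pacing behind -i 256 is the accel error module's, not asserted here; controller attach happens in between, as in the earlier runs):

    # host side (bdevperf's RPC socket): count NVMe errors, retry forever
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side (default spdk.sock): clear any prior injection, then arm corruption
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256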
00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:51.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:51.703 23:01:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:51.964 [2024-09-30 23:01:18.765667] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:32:51.964 [2024-09-30 23:01:18.765732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884155 ] 00:32:51.964 [2024-09-30 23:01:18.842361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.964 [2024-09-30 23:01:18.896146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.535 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:52.535 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:52.535 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:52.535 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:52.795 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:52.795 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.795 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:52.795 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.795 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:52.795 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.055 nvme0n1 00:32:53.055 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:53.055 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.055 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:53.055 
23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.055 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:53.055 23:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:53.316 Running I/O for 2 seconds... 00:32:53.316 [2024-09-30 23:01:20.104031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.104064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.104074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.112037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.112056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.112063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.123633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.123652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.123659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.133062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.133084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.133091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.141707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.141724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.141731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.152487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.152504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.152510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.163977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.163995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.164002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.172877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.172898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.172905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.181557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.181574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.181581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.190611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.190627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.190634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.198675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.198692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.198699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.208306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.208324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.208330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.218184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.316 [2024-09-30 23:01:20.218201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.316 [2024-09-30 23:01:20.218208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.316 [2024-09-30 23:01:20.229706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.229723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.229730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.239908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.239925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.239931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.251463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.251480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.251487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.260021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.260038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.260045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.268793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.268810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.268817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.277279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.277295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.277302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.286870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.286887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.286899] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.295791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.295811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.295818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.304492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.304509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.304516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.313461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.313478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.313485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.317 [2024-09-30 23:01:20.322658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.317 [2024-09-30 23:01:20.322674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.317 [2024-09-30 23:01:20.322680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.577 [2024-09-30 23:01:20.331657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.331675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.331682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.340108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.340125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.340132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.350631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.350648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16032 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.350654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.362027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.362045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.362051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.374517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.374534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.374541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.382096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.382113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.382119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.393211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.393227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.393234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.403538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.403555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.403561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.411882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.411903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.411910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.420636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.420653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:44 nsid:1 lba:7727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.420659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.429814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.429830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.429836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.439048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.439064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.439070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.447185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.447201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.447208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.458024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.458041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.458051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.466847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.466865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.466871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.476151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.476168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.476174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.483997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.484014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.484021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.494279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.494296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.494303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.501823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.501840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.501846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.513325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.513341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.513348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.522324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.522340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.522346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.531310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.531327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.531333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.539519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:53.578 [2024-09-30 23:01:20.539542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.578 [2024-09-30 23:01:20.539549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.578 [2024-09-30 23:01:20.548744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 
00:32:53.578 [2024-09-30 23:01:20.548761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:53.578 [2024-09-30 23:01:20.548767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:53.578 [2024-09-30 23:01:20.558870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30)
00:32:53.578 [2024-09-30 23:01:20.558887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:53.578 [2024-09-30 23:01:20.558898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x120bc30), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens more commands between 23:01:20.567109 and 23:01:21.082258; only the timestamp, cid, and lba fields vary ...]
00:32:54.102 26869.00 IOPS, 104.96 MiB/s
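Every failure in this run follows one pattern: the CRC32C data digest that the initiator recomputes over a received data PDU (reported by nvme_tcp_accel_seq_recv_compute_crc32_done) does not match the digest carried in the PDU, so the affected READ is completed with COMMAND TRANSIENT TRANSPORT ERROR, printed as status type/code (00/22), with dnr:0 (Do Not Retry clear), meaning the host is allowed to retry the command. A minimal sketch of how that status word decodes, assuming the standard NVMe completion-queue-entry Dword 3 layout (illustrative only, not SPDK code):

    /* Decode the status word that spdk_nvme_print_completion renders as
     * "(00/22) ... p:0 m:0 dnr:0".  Layout per the NVMe completion
     * queue entry, Dword 3: bit 16 phase tag, bits 24:17 status code,
     * bits 27:25 status code type, bit 30 more, bit 31 do-not-retry. */
    #include <stdint.h>
    #include <stdio.h>

    struct status_bits {
        unsigned p;   /* phase tag        (bit 16)     */
        unsigned sc;  /* status code      (bits 24:17) */
        unsigned sct; /* status code type (bits 27:25) */
        unsigned m;   /* more             (bit 30)     */
        unsigned dnr; /* do not retry     (bit 31)     */
    };

    static struct status_bits decode_cqe_dw3(uint32_t dw3)
    {
        struct status_bits s = {
            .p   = (dw3 >> 16) & 0x1,
            .sc  = (dw3 >> 17) & 0xff,
            .sct = (dw3 >> 25) & 0x7,
            .m   = (dw3 >> 30) & 0x1,
            .dnr = (dw3 >> 31) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x22 is what the log prints as
         * "COMMAND TRANSIENT TRANSPORT ERROR (00/22)". */
        uint32_t dw3 = (0x22u << 17) | (0x0u << 25); /* p:0 m:0 dnr:0 */
        struct status_bits s = decode_cqe_dw3(dw3);
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

Compiled and run, this prints "(00/22) p:0 m:0 dnr:0", matching the completion lines above: a transient transport error (here caused by the digest mismatch) that the host may safely retry.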
[... the data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) sequence continues from 23:01:21.093504 through 23:01:21.811113, again with only the timestamp, cid, and lba fields varying ...]
00:32:54.889 [2024-09-30 23:01:21.822575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30)
00:32:54.889 [2024-09-30 23:01:21.822592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:54.889 [2024-09-30 23:01:21.822598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.889 [2024-09-30 23:01:21.834130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:54.889 [2024-09-30 23:01:21.834147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.889 [2024-09-30 23:01:21.834154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.889 [2024-09-30 23:01:21.845601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:54.889 [2024-09-30 23:01:21.845618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.889 [2024-09-30 23:01:21.845624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.889 [2024-09-30 23:01:21.856712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:54.889 [2024-09-30 23:01:21.856729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.889 [2024-09-30 23:01:21.856735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.889 [2024-09-30 23:01:21.869146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:54.889 [2024-09-30 23:01:21.869164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.889 [2024-09-30 23:01:21.869171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.890 [2024-09-30 23:01:21.880839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:54.890 [2024-09-30 23:01:21.880856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.890 [2024-09-30 23:01:21.880863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.890 [2024-09-30 23:01:21.891657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:54.890 [2024-09-30 23:01:21.891674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.890 [2024-09-30 23:01:21.891681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.890 [2024-09-30 23:01:21.898977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:54.890 [2024-09-30 23:01:21.898994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.890 [2024-09-30 23:01:21.899000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.909461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.909478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.909487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.918157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.918174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.918180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.927614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.927631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.927638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.938202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.938219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.938226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.948401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.948418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.948424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.957573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.957590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.957596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.965208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.965225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:55.151 [2024-09-30 23:01:21.965232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.975476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.975494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.975500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.985799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.985816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.985822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:21.994944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:21.994961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:21.994968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:22.003839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:22.003856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:22.003863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:22.011437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:22.011454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:22.011461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:22.022198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:22.022215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:22.022221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:22.032870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:22.032887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:23483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:22.032898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:22.042781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:22.042797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:22.042804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:22.050641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:22.050658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:22.050665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:22.062503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.151 [2024-09-30 23:01:22.062521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.151 [2024-09-30 23:01:22.062527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.151 [2024-09-30 23:01:22.074339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.152 [2024-09-30 23:01:22.074357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.152 [2024-09-30 23:01:22.074366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.152 [2024-09-30 23:01:22.085624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x120bc30) 00:32:55.152 [2024-09-30 23:01:22.085641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.152 [2024-09-30 23:01:22.085647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.152 26946.00 IOPS, 105.26 MiB/s 00:32:55.152 Latency(us) 00:32:55.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.152 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:55.152 nvme0n1 : 2.01 26943.91 105.25 0.00 0.00 4745.34 2170.88 17803.95 00:32:55.152 =================================================================================================================== 00:32:55.152 Total : 26943.91 105.25 0.00 0.00 4745.34 2170.88 17803.95 00:32:55.152 { 00:32:55.152 "results": [ 00:32:55.152 { 00:32:55.152 "job": "nvme0n1", 00:32:55.152 "core_mask": "0x2", 00:32:55.152 "workload": "randread", 00:32:55.152 "status": "finished", 00:32:55.152 "queue_depth": 
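For reference, once the elapsed-time prefixes are stripped, the results blob above is plain JSON, so the headline numbers can be pulled out with jq. A minimal sketch, assuming the bare JSON has been saved to a hypothetical results.json:

    # Hypothetical post-processing: report per-job IOPS and average latency
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json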
00:32:55.152 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:55.152 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:55.152 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:55.152 | .driver_specific
00:32:55.152 | .nvme_error
00:32:55.152 | .status_code
00:32:55.152 | .command_transient_transport_error'
00:32:55.152 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 ))
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 884155
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 884155 ']'
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 884155
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 884155
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 884155'
killing process with pid 884155
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 884155
Received shutdown signal, test time was about 2.000000 seconds
00:32:55.412
00:32:55.412                                          Latency(us)
00:32:55.412 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average       min       max
00:32:55.412 ===================================================================================================================
00:32:55.412 Total                       :                  0.00       0.00      0.00    0.00       0.00      0.00      0.00
00:32:55.412 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 884155
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=884845
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 884845 /var/tmp/bperf.sock
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 884845 ']'
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:55.674 23:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:55.674 [2024-09-30 23:01:22.548056] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:32:55.674 [2024-09-30 23:01:22.548116] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884845 ]
00:32:55.674 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:55.674 Zero copy mechanism will not be used.
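Reconstructed from the digest.sh xtrace above, the pass/fail check amounts to the following helper: read the bdev's NVMe error counters over the bperf RPC socket and require a non-zero transient-error count (211 here). This is a sketch under the assumption that bperf_rpc simply wraps scripts/rpc.py against /var/tmp/bperf.sock, with paths as they appear in the log:

    # Sketch of get_transient_errcount, pieced together from the trace above
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # in this run 211 transient transport errors were counted, so the step passed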
00:32:55.674 [2024-09-30 23:01:22.623623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:55.674 [2024-09-30 23:01:22.676726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:32:56.615 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:56.615 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:56.615 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:56.616 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:56.616 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:56.616 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.616 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:56.616 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.616 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:56.616 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:56.876 nvme0n1
00:32:56.876 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:56.876 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.876 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:57.143 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:57.143 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:57.143 23:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:57.143 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:57.143 Zero copy mechanism will not be used.
00:32:57.143 Running I/O for 2 seconds...
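Pulled together from the trace above, the setup for this 131072-byte, qd=16 pass is: clear any previous CRC32C error injection, attach an NVMe-oF TCP controller with data digest enabled (--ddgst), re-arm the injection (-t corrupt -i 32, as in the log), then start the timed run. A standalone sketch with the same RPCs, run from an SPDK checkout; which process the unsocketed rpc_cmd calls target is not visible in the trace, so the plain rpc.py invocations below are an assumption:

    # Error-injection setup as it appears in the trace
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # assumed default RPC socket
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32    # assumed default RPC socket
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests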
00:32:57.143 [2024-09-30 23:01:23.913485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840)
00:32:57.143 [2024-09-30 23:01:23.913518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.143 [2024-09-30 23:01:23.913527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.143 [2024-09-30 23:01:23.918673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840)
00:32:57.143 [2024-09-30 23:01:23.918699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.143 [2024-09-30 23:01:23.918706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... dozens more of the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) record triplets on tqpair=(0x582840), differing only in timestamp, cid, lba, and sqhd, omitted ...]
00:32:57.474 [2024-09-30 23:01:24.458033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840)
00:32:57.474 [2024-09-30 23:01:24.458059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.474 [2024-09-30 23:01:24.458065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:57.474 [2024-09-30 23:01:24.465675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.474 [2024-09-30 23:01:24.465695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.474 [2024-09-30 23:01:24.465702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.474 [2024-09-30 23:01:24.474434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.474 [2024-09-30 23:01:24.474453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.474 [2024-09-30 23:01:24.474460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.772 [2024-09-30 23:01:24.483788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.772 [2024-09-30 23:01:24.483808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.772 [2024-09-30 23:01:24.483814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.772 [2024-09-30 23:01:24.494596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.494615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.494621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.503649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.503667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.503673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.512696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.512715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.512724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.521960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.521979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.521986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.532079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.532098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.532104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.537063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.537082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.537088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.545868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.545886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.545892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.553007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.553025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.553031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.560758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.560776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.560782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.567963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.567981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.567988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.577792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.577810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.577817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.585347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.585365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.585371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.593833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.593850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.593856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.602676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.602695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.602704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.611408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.611426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.611432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.619757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.619776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.619784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.629221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.629239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.629246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.638886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.638911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.638917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.642261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.642280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.642286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.649064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.649082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.649088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.657297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.657316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.657322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.666143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.666161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.666167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.675595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.675617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.675623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.683919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.683937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.683943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.690736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.690754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 
[2024-09-30 23:01:24.690761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.773 [2024-09-30 23:01:24.699539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.773 [2024-09-30 23:01:24.699558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.773 [2024-09-30 23:01:24.699565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.708117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.708136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.708142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.714097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.714115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.714122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.721099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.721117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.721123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.731446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.731465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.731471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.741366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.741385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.741391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.751614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.751633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.751639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.759786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.759804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.759811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.768841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.768859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.768865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:57.774 [2024-09-30 23:01:24.774957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:57.774 [2024-09-30 23:01:24.774975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.774 [2024-09-30 23:01:24.774981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.783346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.783365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.783371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.791674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.791694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.791701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.801857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.801876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.801883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.811043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.811060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.811067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.819519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.819538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.819548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.830425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.830443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.830450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.838852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.838871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.838878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.848533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.848552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.848558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.858579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.858598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.858604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.869150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.869169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.869175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.878982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.879001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.879007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.887674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.887692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.887698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.894070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.894087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.894093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.042 3940.00 IOPS, 492.50 MiB/s [2024-09-30 23:01:24.903995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.904016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.904023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.911593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.911610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.911616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.922584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.922602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.042 [2024-09-30 23:01:24.922609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.042 [2024-09-30 23:01:24.930939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.042 [2024-09-30 23:01:24.930958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:24.930965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:24.936673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:24.936691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:24.936697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:24.947713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:24.947730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:24.947736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:24.959159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:24.959177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:24.959183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:24.970948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:24.970966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:24.970972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:24.983093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:24.983112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:24.983118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:24.995266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:24.995285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:24.995291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:25.008423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:25.008442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:25.008449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:25.018806] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:25.018825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:25.018832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:25.026151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:25.026170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:25.026176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:25.035845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:25.035864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:25.035870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:25.044498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:25.044518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:25.044524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.043 [2024-09-30 23:01:25.054069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.043 [2024-09-30 23:01:25.054089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.043 [2024-09-30 23:01:25.054095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.061955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.061974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.061981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.072139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.072158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.072168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:32:58.305 [2024-09-30 23:01:25.084502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.084520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.084526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.095892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.095916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.095922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.107282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.107301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.107308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.114125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.114144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.114150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.121207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.121226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.121233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.132396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.132415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.132421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.142094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.142112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.142118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.150834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.150853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.150860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.159701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.159719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.159726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.167833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.167851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.167858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.175298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.175316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.175323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.186522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.186540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.186546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.192349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.192367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.192374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.199925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.199943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.199949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.211432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.211450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.211456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.220377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.220396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.305 [2024-09-30 23:01:25.220402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.305 [2024-09-30 23:01:25.228272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.305 [2024-09-30 23:01:25.228290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.228300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.306 [2024-09-30 23:01:25.236316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.306 [2024-09-30 23:01:25.236334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.236340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.306 [2024-09-30 23:01:25.245002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.306 [2024-09-30 23:01:25.245020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.245026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.306 [2024-09-30 23:01:25.256359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.306 [2024-09-30 23:01:25.256377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.256384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.306 [2024-09-30 23:01:25.268835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.306 [2024-09-30 23:01:25.268853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.268859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.306 [2024-09-30 23:01:25.280561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.306 [2024-09-30 23:01:25.280580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.280586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.306 [2024-09-30 23:01:25.291980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.306 [2024-09-30 23:01:25.291998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.292004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.306 [2024-09-30 23:01:25.303078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.306 [2024-09-30 23:01:25.303097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.303103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.306 [2024-09-30 23:01:25.314566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.306 [2024-09-30 23:01:25.314584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.306 [2024-09-30 23:01:25.314591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.567 [2024-09-30 23:01:25.323680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.567 [2024-09-30 23:01:25.323702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.567 [2024-09-30 23:01:25.323709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.567 [2024-09-30 23:01:25.333799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.567 [2024-09-30 23:01:25.333818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.567 [2024-09-30 23:01:25.333824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.567 [2024-09-30 23:01:25.340422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840) 00:32:58.567 [2024-09-30 23:01:25.340440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.567 
[2024-09-30 23:01:25.340446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:58.567 [... dozens of near-identical entry groups elided, 23:01:25.348959 through 23:01:25.893191: nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done logs "*ERROR*: data digest error on tqpair=(0x582840)" against each in-flight READ (sqid:1, nsid:1, len:32, cids cycling through 0-13), and every such READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:32:59.091 3594.00 IOPS, 449.25 MiB/s
00:32:59.091 [2024-09-30 23:01:25.902144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x582840)
00:32:59.091 [2024-09-30 23:01:25.902162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.091 [2024-09-30 23:01:25.902169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.091
00:32:59.091 Latency(us)
00:32:59.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:59.091 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:59.091 nvme0n1 : 2.00 3597.34 449.67 0.00 0.00 4443.76 655.36 12834.13
00:32:59.091 ===================================================================================================================
00:32:59.091 Total : 3597.34 449.67 0.00 0.00 4443.76 655.36 12834.13
00:32:59.091 {
00:32:59.091 "results": [
00:32:59.091 {
00:32:59.091 "job": "nvme0n1",
00:32:59.091 "core_mask": "0x2",
00:32:59.091 "workload": "randread",
00:32:59.091 "status": "finished",
00:32:59.091 "queue_depth": 16,
00:32:59.091 "io_size": 131072,
00:32:59.091 "runtime": 2.002593,
00:32:59.091 "iops": 3597.33605380624,
00:32:59.091 "mibps": 449.66700672578,
00:32:59.091 "io_failed": 0,
00:32:59.091 "io_timeout": 0,
00:32:59.091 "avg_latency_us": 4443.758445308162,
00:32:59.091 "min_latency_us": 655.36,
00:32:59.091 "max_latency_us": 12834.133333333333
00:32:59.091 }
00:32:59.091 ],
00:32:59.091 "core_count": 1
00:32:59.091 }
00:32:59.091 23:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:59.091 23:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:59.091 | .driver_specific
00:32:59.091 | .nvme_error
00:32:59.091 | .status_code
00:32:59.091 | .command_transient_transport_error'
00:32:59.091 23:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:59.091 23:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:59.091 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 ))
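[Editor's sketch: the xtrace above reconstructs to roughly the following helper; this is a minimal re-creation from the trace, not the literal function body in test/nvmf/host/digest.sh, which may differ in detail.]

    # Query the bdevperf instance over its RPC socket for per-bdev I/O stats
    # and extract the count of completions that failed with a transient
    # transport error (populated because bdev_nvme_set_options was given
    # --nvme-error-stat).
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test asserts that at least one such completion was observed;
    # this run counted 232 of them.
    (( $(get_transient_errcount nvme0n1) > 0 ))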
00:32:59.091 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 884845
00:32:59.091 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 884845 ']'
00:32:59.091 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 884845
00:32:59.091 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 884845
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 884845'
00:32:59.352 killing process with pid 884845
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 884845
00:32:59.352 Received shutdown signal, test time was about 2.000000 seconds
00:32:59.352
00:32:59.352 Latency(us)
00:32:59.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:59.352 ===================================================================================================================
00:32:59.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 884845
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=885557
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 885557 /var/tmp/bperf.sock
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 885557 ']'
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:59.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:59.352 23:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:59.352 [2024-09-30 23:01:26.349803] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:32:59.352 [2024-09-30 23:01:26.349864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885557 ]
00:32:59.612 [2024-09-30 23:01:26.424731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:59.612 [2024-09-30 23:01:26.480624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:33:00.183 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:00.183 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:33:00.183 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:00.183 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:00.443 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:00.443 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:00.443 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:00.443 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:00.443 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:00.443 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:00.704 nvme0n1
00:33:00.704 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:00.704 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:00.704 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
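[Editor's sketch: the setup trace above, together with the perform_tests call that follows, condenses to roughly this command sequence. Every path, address, and flag is taken verbatim from this run; the RPC shell variable is shorthand introduced here for readability.]

    # Shorthand for talking to the bdevperf instance listening on bperf.sock.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Track NVMe error completions per status code and retry indefinitely,
    # so injected digest failures surface as transient-error counters
    # rather than failed I/O.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c offload healthy while the controller attaches with data
    # digest enabled (--ddgst) over TCP...
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then corrupt every 256th crc32c operation so data digests miscompare.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    # Kick off the 2-second randwrite workload configured when bdevperf started.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests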
23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:00.704 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:00.704 23:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:00.704 Running I/O for 2 seconds...
00:33:00.704 [2024-09-30 23:01:27.693980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f81e0
00:33:00.704 [2024-09-30 23:01:27.694731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:00.704 [2024-09-30 23:01:27.694756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:00.704 [... dozens of near-identical entry groups elided, 23:01:27.703743 through 23:01:28.213533, where this capture cuts off mid-entry: tcp.c:2233:data_crc32_calc_done logs "*ERROR*: Data digest error on tqpair=(0x21d3c20)" against a rotating set of pdu offsets (0x2000198e3498 ... 0x2000198ff3c8), each paired with an in-flight WRITE (sqid:1, nsid:1, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000), and every such WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.230 [2024-09-30 23:01:28.213549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:01.230 [2024-09-30 23:01:28.220944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198ec840 00:33:01.230 [2024-09-30 23:01:28.221974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.230 [2024-09-30 23:01:28.221990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:01.230 [2024-09-30 23:01:28.229388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f8e88 00:33:01.230 [2024-09-30 23:01:28.230383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.230 [2024-09-30 23:01:28.230399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:01.230 [2024-09-30 23:01:28.237855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e6fa8 00:33:01.230 [2024-09-30 23:01:28.238859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.230 [2024-09-30 23:01:28.238875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.246315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198df550 00:33:01.492 [2024-09-30 23:01:28.247323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.247339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.255885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fda78 00:33:01.492 [2024-09-30 23:01:28.257343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.257358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.261882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e27f0 00:33:01.492 [2024-09-30 23:01:28.262563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.262584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.270332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fb048 00:33:01.492 [2024-09-30 23:01:28.270982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.270998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.278930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fa3a0 00:33:01.492 [2024-09-30 23:01:28.279556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.279572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.287368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198ea680 00:33:01.492 [2024-09-30 23:01:28.288043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.288059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.295844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f4b08 00:33:01.492 [2024-09-30 23:01:28.296530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.296546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.303758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e6738 00:33:01.492 [2024-09-30 23:01:28.304409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.304426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.313176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f7538 00:33:01.492 [2024-09-30 23:01:28.313910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.313926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.321634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e5658 00:33:01.492 [2024-09-30 23:01:28.322417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.322433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.492 [2024-09-30 23:01:28.330098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198df118 00:33:01.492 [2024-09-30 23:01:28.330872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:17463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.492 [2024-09-30 23:01:28.330888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.338554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e9168 00:33:01.493 [2024-09-30 23:01:28.339343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.339359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.347018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f4298 00:33:01.493 [2024-09-30 23:01:28.347799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.347815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.355481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198ec408 00:33:01.493 [2024-09-30 23:01:28.356284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.356300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.363928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f96f8 00:33:01.493 [2024-09-30 23:01:28.364702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.364718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.372356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198de470 00:33:01.493 [2024-09-30 23:01:28.373117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.373133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.380795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e6300 00:33:01.493 [2024-09-30 23:01:28.381591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.381607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.389250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fb8b8 00:33:01.493 [2024-09-30 23:01:28.390030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.390046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.397694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fc998 00:33:01.493 [2024-09-30 23:01:28.398476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.398491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.406158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fe2e8 00:33:01.493 [2024-09-30 23:01:28.406952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.406968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.414591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f7da8 00:33:01.493 [2024-09-30 23:01:28.415391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.415407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.423044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f31b8 00:33:01.493 [2024-09-30 23:01:28.423839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.423855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.431504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e1b48 00:33:01.493 [2024-09-30 23:01:28.432150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.432166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.440005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f0788 00:33:01.493 [2024-09-30 23:01:28.440795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.440811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.448469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f2948 00:33:01.493 [2024-09-30 
23:01:28.449245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.449261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.456931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f7100 00:33:01.493 [2024-09-30 23:01:28.457709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.457725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.465369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e5220 00:33:01.493 [2024-09-30 23:01:28.466164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.466179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.473829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198dece0 00:33:01.493 [2024-09-30 23:01:28.474614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.474630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.482298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e8d30 00:33:01.493 [2024-09-30 23:01:28.483072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.483090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.490762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198ec840 00:33:01.493 [2024-09-30 23:01:28.491550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.491566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.493 [2024-09-30 23:01:28.499210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f8e88 00:33:01.493 [2024-09-30 23:01:28.499990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.493 [2024-09-30 23:01:28.500005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.507659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e6fa8 
00:33:01.755 [2024-09-30 23:01:28.508442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.508458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.516118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198df550 00:33:01.755 [2024-09-30 23:01:28.516921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.516936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.524584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198de038 00:33:01.755 [2024-09-30 23:01:28.525381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.525397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.533047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fbcf0 00:33:01.755 [2024-09-30 23:01:28.533836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.533851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.541504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198ff3c8 00:33:01.755 [2024-09-30 23:01:28.542296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.542312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.549956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fdeb0 00:33:01.755 [2024-09-30 23:01:28.550744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.550759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.558412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f2510 00:33:01.755 [2024-09-30 23:01:28.559158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.559174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.566865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) 
with pdu=0x2000198e27f0 00:33:01.755 [2024-09-30 23:01:28.567666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.567682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.575319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e4de8 00:33:01.755 [2024-09-30 23:01:28.576094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.576110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.583777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198dfdc0 00:33:01.755 [2024-09-30 23:01:28.584557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.584573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.592222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f7538 00:33:01.755 [2024-09-30 23:01:28.592998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.593014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.600664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e5658 00:33:01.755 [2024-09-30 23:01:28.601442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.601458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.609125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198df118 00:33:01.755 [2024-09-30 23:01:28.609870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.609885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.617580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e9168 00:33:01.755 [2024-09-30 23:01:28.618323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.618339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.626051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x21d3c20) with pdu=0x2000198f4298 00:33:01.755 [2024-09-30 23:01:28.626839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.626855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.634509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198ec408 00:33:01.755 [2024-09-30 23:01:28.635290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.635306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.642968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f96f8 00:33:01.755 [2024-09-30 23:01:28.643760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.643775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.651410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198de470 00:33:01.755 [2024-09-30 23:01:28.652191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.652207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.659868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e6300 00:33:01.755 [2024-09-30 23:01:28.660643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.660658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.668334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fb8b8 00:33:01.755 [2024-09-30 23:01:28.669095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.669111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.676800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fc998 00:33:01.755 [2024-09-30 23:01:28.677583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.677599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:01.755 29915.00 IOPS, 116.86 MiB/s [2024-09-30 23:01:28.685268] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fda78 00:33:01.755 [2024-09-30 23:01:28.686061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.686077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.693699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e23b8 00:33:01.755 [2024-09-30 23:01:28.694483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.694498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.755 [2024-09-30 23:01:28.702303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198dfdc0 00:33:01.755 [2024-09-30 23:01:28.703061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.755 [2024-09-30 23:01:28.703079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.756 [2024-09-30 23:01:28.710752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e5658 00:33:01.756 [2024-09-30 23:01:28.711526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.756 [2024-09-30 23:01:28.711542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.756 [2024-09-30 23:01:28.719223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e9168 00:33:01.756 [2024-09-30 23:01:28.719994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.756 [2024-09-30 23:01:28.720010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.756 [2024-09-30 23:01:28.727676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198ec408 00:33:01.756 [2024-09-30 23:01:28.728462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.756 [2024-09-30 23:01:28.728478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.756 [2024-09-30 23:01:28.736111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198de470 00:33:01.756 [2024-09-30 23:01:28.736886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.756 [2024-09-30 23:01:28.736904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.756 
[2024-09-30 23:01:28.744547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fb8b8 00:33:01.756 [2024-09-30 23:01:28.745341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.756 [2024-09-30 23:01:28.745356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.756 [2024-09-30 23:01:28.753148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fe2e8 00:33:01.756 [2024-09-30 23:01:28.753917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.756 [2024-09-30 23:01:28.753933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.756 [2024-09-30 23:01:28.761589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f31b8 00:33:01.756 [2024-09-30 23:01:28.762364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.756 [2024-09-30 23:01:28.762380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:01.756 [2024-09-30 23:01:28.770060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f0788 00:33:02.017 [2024-09-30 23:01:28.770853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.017 [2024-09-30 23:01:28.770870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.017 [2024-09-30 23:01:28.778497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f0bc0 00:33:02.017 [2024-09-30 23:01:28.779275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.017 [2024-09-30 23:01:28.779291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.017 [2024-09-30 23:01:28.786950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e5ec8 00:33:02.017 [2024-09-30 23:01:28.787727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.017 [2024-09-30 23:01:28.787742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.017 [2024-09-30 23:01:28.795375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e1710 00:33:02.017 [2024-09-30 23:01:28.796163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.017 [2024-09-30 23:01:28.796179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 
m:0 dnr:0 00:33:02.017 [2024-09-30 23:01:28.803837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f8a50 00:33:02.017 [2024-09-30 23:01:28.804634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.017 [2024-09-30 23:01:28.804650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.017 [2024-09-30 23:01:28.812377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198df988 00:33:02.017 [2024-09-30 23:01:28.813174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.017 [2024-09-30 23:01:28.813189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.017 [2024-09-30 23:01:28.820818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fc128 00:33:02.017 [2024-09-30 23:01:28.821600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.017 [2024-09-30 23:01:28.821616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.017 [2024-09-30 23:01:28.829250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fda78 00:33:02.017 [2024-09-30 23:01:28.830044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.830060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.837709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e23b8 00:33:02.018 [2024-09-30 23:01:28.838500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.838516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.846230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198dfdc0 00:33:02.018 [2024-09-30 23:01:28.847003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.847018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.854700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e5658 00:33:02.018 [2024-09-30 23:01:28.855499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.855515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.863165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e9168 00:33:02.018 [2024-09-30 23:01:28.863940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.863955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.871824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198ee5c8 00:33:02.018 [2024-09-30 23:01:28.872343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.872359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.880512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f5be8 00:33:02.018 [2024-09-30 23:01:28.881393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.881408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.888876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198eb328 00:33:02.018 [2024-09-30 23:01:28.889765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.889780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.897608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fb480 00:33:02.018 [2024-09-30 23:01:28.898244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.898260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.906694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198dece0 00:33:02.018 [2024-09-30 23:01:28.907815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.907830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.915116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e1f80 00:33:02.018 [2024-09-30 23:01:28.916261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.916277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.923719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f81e0 00:33:02.018 [2024-09-30 23:01:28.924877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.924897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.932184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f46d0 00:33:02.018 [2024-09-30 23:01:28.933342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.933359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.940656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198dece0 00:33:02.018 [2024-09-30 23:01:28.941812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.941828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.949139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198fc560 00:33:02.018 [2024-09-30 23:01:28.950295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.950316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.957613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198eb760 00:33:02.018 [2024-09-30 23:01:28.958776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.958792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.966042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198e1f80 00:33:02.018 [2024-09-30 23:01:28.967210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.967226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.018 [2024-09-30 23:01:28.974505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f81e0 00:33:02.018 [2024-09-30 23:01:28.975663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.018 [2024-09-30 23:01:28.975679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated similar entries omitted: each injected CRC failure logs a tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error" on tqpair=(0x21d3c20) with a cycling pdu value, the affected WRITE command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:33:02.806
[2024-09-30 23:01:29.684078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3c20) with pdu=0x2000198f1430 00:33:02.806 30061.00 IOPS, 117.43 MiB/s [2024-09-30 23:01:29.685292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.806 [2024-09-30 23:01:29.685307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:02.806 00:33:02.806 Latency(us) 00:33:02.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.806 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.806 nvme0n1 : 2.00 30079.87 117.50 0.00 0.00 4250.04 1686.19 13434.88 00:33:02.806 =================================================================================================================== 00:33:02.806 Total : 30079.87 117.50 0.00 0.00 4250.04 1686.19 13434.88 00:33:02.806 { 00:33:02.806 "results": [ 00:33:02.806 { 00:33:02.806 "job": "nvme0n1", 00:33:02.806 "core_mask": "0x2", 00:33:02.806 "workload": "randwrite", 00:33:02.806 "status": "finished", 00:33:02.806 "queue_depth": 128, 00:33:02.806 "io_size": 4096, 00:33:02.806 "runtime": 2.00453, 00:33:02.806 "iops": 30079.869096496437, 00:33:02.806 "mibps": 117.49948865818921, 00:33:02.806 "io_failed": 0, 00:33:02.806 "io_timeout": 0, 00:33:02.806 "avg_latency_us": 4250.044498695325, 00:33:02.806 "min_latency_us": 1686.1866666666667, 00:33:02.806 "max_latency_us": 13434.88 00:33:02.806 } 00:33:02.806 ], 00:33:02.806 "core_count": 1 00:33:02.806 } 00:33:02.806 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:02.806 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:02.806 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:02.806 | .driver_specific 00:33:02.806 | .nvme_error 00:33:02.806 | .status_code 00:33:02.806 | .command_transient_transport_error' 00:33:02.806 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 885557 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 885557 ']' 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 885557 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 885557 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:03.067 23:01:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 885557' 00:33:03.067 killing process with pid 885557 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 885557 00:33:03.067 Received shutdown signal, test time was about 2.000000 seconds 00:33:03.067 00:33:03.067 Latency(us) 00:33:03.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.067 =================================================================================================================== 00:33:03.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:03.067 23:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 885557 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=886353 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 886353 /var/tmp/bperf.sock 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 886353 ']' 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:03.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:03.067 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:03.328 [2024-09-30 23:01:30.128870] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:33:03.328 [2024-09-30 23:01:30.128935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886353 ] 00:33:03.328 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:03.328 Zero copy mechanism will not be used. 
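For reference, the get_transient_errcount check that gated the run above (host/digest.sh, traced just before killprocess) reduces to a single bdev_get_iostat RPC piped through jq. A minimal standalone sketch, assuming the same bperf RPC socket, script paths, and bdev name as this run:

#!/usr/bin/env bash
# Sketch of the get_transient_errcount helper as traced above.
# Assumes bdev_nvme_set_options --nvme-error-stat was applied first; that is
# what exposes the per-status-code counters under driver_specific.nvme_error.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# Same assertion as the trace: the run passes only if at least one command
# completed with COMMAND TRANSIENT TRANSPORT ERROR (236 in the run above).
(( $(get_transient_errcount nvme0n1) > 0 ))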
00:33:03.328 [2024-09-30 23:01:30.206064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.328 [2024-09-30 23:01:30.259536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.268 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:04.268 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:04.268 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:04.268 23:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:04.268 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:04.268 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.268 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.268 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.268 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:04.268 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:04.531 nvme0n1 00:33:04.531 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:04.531 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.531 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.531 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.531 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:04.531 23:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:04.531 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:04.531 Zero copy mechanism will not be used. 00:33:04.531 Running I/O for 2 seconds... 
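Before the write loop below starts failing on purpose, the setup the trace above just walked through is worth condensing into plain shell. Paths, address, and NQN are copied from this run; treating rpc_cmd as the nvmf target's default RPC socket is an assumption here, while the -s /var/tmp/bperf.sock calls address bdevperf:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# bperf: tally NVMe error statuses and retry failed I/O indefinitely, so the
# injected digest errors are counted rather than aborting the job.
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# target: keep crc32c intact while the controller attaches cleanly ...
$RPC accel_error_inject_error -o crc32c -t disable

# bperf: attach over TCP with data digest enabled (--ddgst).
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# target: ... then switch the injection to corrupt (-t corrupt -i 32, as
# traced) so data digest verification starts failing on write data.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# bperf: run the timed randwrite workload.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests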
00:33:04.531 [2024-09-30 23:01:31.460924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.531 [2024-09-30 23:01:31.461143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.531 [2024-09-30 23:01:31.461171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.531 [2024-09-30 23:01:31.464680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.531 [2024-09-30 23:01:31.464882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.531 [2024-09-30 23:01:31.464905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.531 [2024-09-30 23:01:31.468946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.531 [2024-09-30 23:01:31.469142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.531 [2024-09-30 23:01:31.469162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.531 [2024-09-30 23:01:31.472647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.531 [2024-09-30 23:01:31.472838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.531 [2024-09-30 23:01:31.472855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.531 [2024-09-30 23:01:31.476848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.531 [2024-09-30 23:01:31.477047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.531 [2024-09-30 23:01:31.477064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.531 [2024-09-30 23:01:31.481027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.531 [2024-09-30 23:01:31.481220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.531 [2024-09-30 23:01:31.481236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.531 [2024-09-30 23:01:31.486979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.531 [2024-09-30 23:01:31.487172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.531 [2024-09-30 23:01:31.487188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... repeated similar entries omitted: the 16-deep 131072-byte randwrite pass logs the same injected-digest-failure pattern on tqpair=(0x21d3f60) with pdu=0x2000198fef90, each 32-block WRITE completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:33:04.795 [2024-09-30 23:01:31.706741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.706817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.706832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.717711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.717962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.717979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.728556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.728783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.728799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.739406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.739734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.739752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.750070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.750291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.750307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.761126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.761346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.761363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.772725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.772966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.772982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.785059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.785278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.785294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.796284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.796491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.796508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:04.795 [2024-09-30 23:01:31.807241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:04.795 [2024-09-30 23:01:31.807419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.795 [2024-09-30 23:01:31.807435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.058 [2024-09-30 23:01:31.816564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.058 [2024-09-30 23:01:31.816815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-09-30 23:01:31.816832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.058 [2024-09-30 23:01:31.827793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.058 [2024-09-30 23:01:31.828051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.828068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.837082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.837328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.837344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.840753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.840928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.840944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.844550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 
[2024-09-30 23:01:31.844718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.844734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.848031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.848201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.848218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.851503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.851671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.851687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.855052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.855219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.855235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.858524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.858691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.858707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.861907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.862075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.862091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.865206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.865372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.865388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.868458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.868625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.868641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.871969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.872176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.872192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.875821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.875996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.876017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.878822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.878995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.879011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.883799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.884084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.884102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.890566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.890735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.890751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.893758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.893931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.893947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.898270] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.898559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.898576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.903558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.903837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.903852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.910774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.910831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.910847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.914550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.914595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.914610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.918157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.918207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.918222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.923688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.923739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.923753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.927415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.927457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.927473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.059 
[2024-09-30 23:01:31.931185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.931231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.931246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.936141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.936208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.936222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.940659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.059 [2024-09-30 23:01:31.940704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-09-30 23:01:31.940719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.059 [2024-09-30 23:01:31.944146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.944194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.944209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.949033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.949088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.949103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.952463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.952516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.952534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.955758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.955816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.955830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.959275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.959317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.959331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.962946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.962989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.963004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.966376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.966422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.966437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.969766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.969809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.969825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.973170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.973228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.973243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.979365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.979421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.979437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.983561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.983628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.983643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.990976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.991022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.991040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.994264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.994310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.994325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:31.997571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:31.997622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:31.997637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.001271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.001340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.001355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.004953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.004997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.005012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.008641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.008684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.008698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.012248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.012295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.012309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.015645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.015691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.015706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.019464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.019533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.019548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.022556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.022606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.022621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.026019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.026065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.026081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.029654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.029702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.029717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.032837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.032889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.032909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.035976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.036024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.036039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.039134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.039178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.039193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.042292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.042341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.060 [2024-09-30 23:01:32.042356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.060 [2024-09-30 23:01:32.048771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.060 [2024-09-30 23:01:32.048814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.061 [2024-09-30 23:01:32.048828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.061 [2024-09-30 23:01:32.054241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.061 [2024-09-30 23:01:32.054298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.061 [2024-09-30 23:01:32.054313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.061 [2024-09-30 23:01:32.057453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.061 [2024-09-30 23:01:32.057504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.061 [2024-09-30 23:01:32.057519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.061 [2024-09-30 23:01:32.060655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.061 [2024-09-30 23:01:32.060704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.061 [2024-09-30 23:01:32.060718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.061 [2024-09-30 23:01:32.064724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.061 [2024-09-30 23:01:32.064793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.061 [2024-09-30 
23:01:32.064808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.061 [2024-09-30 23:01:32.068419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.061 [2024-09-30 23:01:32.068463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.061 [2024-09-30 23:01:32.068478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.061 [2024-09-30 23:01:32.071913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.061 [2024-09-30 23:01:32.071972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.061 [2024-09-30 23:01:32.071987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.079128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.079194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.079209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.088258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.088561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.088577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.098499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.098681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.098697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.107866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.107938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.107957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.114539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.114581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:05.323 [2024-09-30 23:01:32.114596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.119100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.119211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.119227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.123958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.124001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.124016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.128265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.128322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.128337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.134510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.134559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.134574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.142170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.142213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.142228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.147801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.147854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.147870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.154496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.154560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.154576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.158875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.158941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.158957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.164491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.164698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.164713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.172683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.172739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.172754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.180888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.180953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.180968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.184787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.323 [2024-09-30 23:01:32.184887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.323 [2024-09-30 23:01:32.184907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.323 [2024-09-30 23:01:32.190877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.190962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.190978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.198975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.199035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.199050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.204837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.204881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.204902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.209478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.209523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.209541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.214267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.214324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.214339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.218386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.218428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.218444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.224689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.224750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.224767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.231672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.231734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.231749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.241254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.241469] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.241485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.245599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.245669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.245684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.250192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.250263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.250278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.256424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.256495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.256511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.262682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.262960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.262978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.270927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.271012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.271027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.278289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.278512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.278526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.287772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.287866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.287881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.298952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.299235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.299251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.309952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.310235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.310251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.321775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.322042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.322058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.324 [2024-09-30 23:01:32.332801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.324 [2024-09-30 23:01:32.333069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.324 [2024-09-30 23:01:32.333085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.343691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.343977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.343994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.348804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.348848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.348865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.356360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 
23:01:32.356419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.356435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.364753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.365041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.365056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.372249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.372324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.372340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.380915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.380982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.380998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.388976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.389019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.389034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.398217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.398476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.398490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.405445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.405493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.405508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.410630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with 
pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.410675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.410693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.415075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.415125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.415140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.420148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.420216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.420231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.428153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.428433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.428450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.436397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.436448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.436464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.586 5106.00 IOPS, 638.25 MiB/s [2024-09-30 23:01:32.447886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.448003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.448018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.459513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.459750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.459766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.471032] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.471309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.471325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.482993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.586 [2024-09-30 23:01:32.483277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.586 [2024-09-30 23:01:32.483292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.586 [2024-09-30 23:01:32.494318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.494435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.494450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.504814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.505040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.505056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.515455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.515750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.515766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.525530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.525605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.525621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.536358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.536427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.536442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.545087] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.545144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.545159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.554626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.554929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.554945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.563724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.564001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.572205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.572263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.572279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.580692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.580747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.580762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.586958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.587002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.587018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.587 [2024-09-30 23:01:32.595466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.587 [2024-09-30 23:01:32.595517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.587 [2024-09-30 23:01:32.595532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.849 
[2024-09-30 23:01:32.604080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.604129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.604144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.612050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.612096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.612111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.620200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.620483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.620499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.625352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.625396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.625411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.632455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.632500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.632515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.642515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.642573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.642591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.650958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.651251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.651267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.661226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.661602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.661618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.672572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.672822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.672838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.684521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.684814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.684830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.696314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.696566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.696581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.707679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.707962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.707978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.719555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.719819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.719834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.730992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.731253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.731270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.742349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.742491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.742507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.753570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.753870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.753887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.766218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.766532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.766548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.776508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.776782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.776798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.788029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.788196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.788211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.799655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.799917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.799932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.810162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.810225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.810240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.849 [2024-09-30 23:01:32.821217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.849 [2024-09-30 23:01:32.821451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.849 [2024-09-30 23:01:32.821466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.850 [2024-09-30 23:01:32.831397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.850 [2024-09-30 23:01:32.831684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.850 [2024-09-30 23:01:32.831705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.850 [2024-09-30 23:01:32.841769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.850 [2024-09-30 23:01:32.841856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.850 [2024-09-30 23:01:32.841871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.850 [2024-09-30 23:01:32.852972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.850 [2024-09-30 23:01:32.853289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.850 [2024-09-30 23:01:32.853304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.850 [2024-09-30 23:01:32.862698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:05.850 [2024-09-30 23:01:32.863060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.850 [2024-09-30 23:01:32.863076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.873412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.873646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.873662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.882567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.882865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.882881] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.891264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.891526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.891541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.896878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.896948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.896962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.900715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.900759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.900773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.904554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.904604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.904623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.908340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.908394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.908409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.912825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.912880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.912899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.916803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.916889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.916908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.921078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.921123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.921138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.927884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.927958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.927972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.934551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.934610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.934625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.941180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.941238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.941253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.947763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.947952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.947967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.952738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.952824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.952843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.956609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.956670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 
23:01:32.956685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.960377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.960440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.960455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.963703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.963753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.963769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.968498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.968560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.968575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.112 [2024-09-30 23:01:32.976949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.112 [2024-09-30 23:01:32.977006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.112 [2024-09-30 23:01:32.977022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:32.980532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:32.980598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:32.980613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:32.984407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:32.984465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:32.984481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:32.988318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:32.988370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:06.113 [2024-09-30 23:01:32.988385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:32.992498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:32.992551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:32.992566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:32.998118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:32.998337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:32.998352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.004732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.004801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.004817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.009122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.009195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.009210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.016733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.016998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.017014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.027345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.027617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.027632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.037698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.037971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.037986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.048010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.048343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.048359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.052511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.052565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.052583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.060042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.060100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.060116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.064685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.064736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.064752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.068569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.068614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.068629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.072252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.072310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.072325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.076417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.076461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.076476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.080293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.080338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.080354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.084496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.084544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.084559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.092501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.092733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.092748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.097738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.097820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.097836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.101570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.101613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.101628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.105788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.105833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.113 [2024-09-30 23:01:33.105848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.113 [2024-09-30 23:01:33.109703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90 00:33:06.113 [2024-09-30 23:01:33.109767] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.113 [2024-09-30 23:01:33.109782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated entries from 23:01:33.113649 through 23:01:33.445813 condensed: the same triplet recurs for several dozen further WRITEs (lba varies, len:32 throughout): tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21d3f60) with pdu=0x2000198fef90, then nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 ... SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0, with sqhd cycling 0001/0021/0041/0061, p:0 m:0 dnr:0 ...]
00:33:06.645 4795.00 IOPS, 599.38 MiB/s
00:33:06.645 Latency(us)
00:33:06.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:06.645 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:06.645 nvme0n1 : 2.00 4796.89 599.61 0.00 0.00 3331.63 1358.51 14636.37
00:33:06.645 ===================================================================================================================
00:33:06.645 Total : 4796.89 599.61 0.00 0.00 3331.63 1358.51 14636.37
00:33:06.645 {
00:33:06.645 "results": [
00:33:06.645 {
00:33:06.645 "job": "nvme0n1",
00:33:06.645 "core_mask": "0x2",
00:33:06.645 "workload": "randwrite",
00:33:06.645 "status": "finished",
00:33:06.645 "queue_depth": 16,
00:33:06.645 "io_size": 131072,
00:33:06.645 "runtime": 2.00338,
00:33:06.645 "iops": 4796.8932504068125,
00:33:06.645 "mibps": 599.6116563008516,
00:33:06.645 "io_failed": 0,
00:33:06.645 "io_timeout": 0,
00:33:06.645 "avg_latency_us": 3331.6310620881027,
00:33:06.645 "min_latency_us": 1358.5066666666667,
00:33:06.645 "max_latency_us": 14636.373333333333
00:33:06.645 }
00:33:06.645 ],
00:33:06.645 "core_count": 1
00:33:06.645 }
00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:06.645 23:01:33
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:06.645 | .driver_specific 00:33:06.645 | .nvme_error 00:33:06.645 | .status_code 00:33:06.645 | .command_transient_transport_error' 00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 309 > 0 )) 00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 886353 00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 886353 ']' 00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 886353 00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:06.645 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 886353 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 886353' 00:33:06.910 killing process with pid 886353 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 886353 00:33:06.910 Received shutdown signal, test time was about 2.000000 seconds 00:33:06.910 00:33:06.910 Latency(us) 00:33:06.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.910 =================================================================================================================== 00:33:06.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 886353 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 883911 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 883911 ']' 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 883911 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 883911 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
883911' 00:33:06.910 killing process with pid 883911 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 883911 00:33:06.910 23:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 883911 00:33:07.171 00:33:07.171 real 0m16.344s 00:33:07.171 user 0m32.316s 00:33:07.171 sys 0m3.620s 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:07.171 ************************************ 00:33:07.171 END TEST nvmf_digest_error 00:33:07.171 ************************************ 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.171 rmmod nvme_tcp 00:33:07.171 rmmod nvme_fabrics 00:33:07.171 rmmod nvme_keyring 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 883911 ']' 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 883911 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 883911 ']' 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 883911 00:33:07.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (883911) - No such process 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 883911 is not found' 00:33:07.171 Process with pid 883911 is not found 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 
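For reference, the transient-error gate traced above (host/digest.sh@71, @27, @18 and @28) reduces to the short shell sketch below. The rpc.py path, the bperf socket and the jq filter are copied verbatim from the trace; the variable name errcount is illustrative only and error handling is elided:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Ask bdevperf (via its private RPC socket) for nvme0n1's I/O stats, then pull out
    # the count of completions that carried COMMAND TRANSIENT TRANSPORT ERROR (00/22),
    # i.e. the WRITEs failed by the injected data-digest errors.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the test asserts at least one such error; here the count was 309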
00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.171 23:01:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.715 00:33:09.715 real 0m43.730s 00:33:09.715 user 1m8.332s 00:33:09.715 sys 0m13.391s 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:09.715 ************************************ 00:33:09.715 END TEST nvmf_digest 00:33:09.715 ************************************ 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.715 ************************************ 00:33:09.715 START TEST nvmf_bdevperf 00:33:09.715 ************************************ 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:09.715 * Looking for test storage... 
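The lcov probe in the scripts/common.sh trace just below is a plain field-by-field version comparison. A condensed sketch follows; the real helper also validates each field with decimal, which is elided here, so treat this as illustrative:

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"    # split on '.', '-' and ':'
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields compare as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    # As traced: keep the branch/function-coverage LCOV_OPTS only when lcov < 2.
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov detected"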
00:33:09.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:09.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.715 --rc genhtml_branch_coverage=1 00:33:09.715 --rc genhtml_function_coverage=1 00:33:09.715 --rc genhtml_legend=1 00:33:09.715 --rc geninfo_all_blocks=1 00:33:09.715 --rc geninfo_unexecuted_blocks=1 00:33:09.715 00:33:09.715 ' 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:09.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.715 --rc genhtml_branch_coverage=1 00:33:09.715 --rc genhtml_function_coverage=1 00:33:09.715 --rc genhtml_legend=1 00:33:09.715 --rc geninfo_all_blocks=1 00:33:09.715 --rc geninfo_unexecuted_blocks=1 00:33:09.715 00:33:09.715 ' 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:09.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.715 --rc genhtml_branch_coverage=1 00:33:09.715 --rc genhtml_function_coverage=1 00:33:09.715 --rc genhtml_legend=1 00:33:09.715 --rc geninfo_all_blocks=1 00:33:09.715 --rc geninfo_unexecuted_blocks=1 00:33:09.715 00:33:09.715 ' 00:33:09.715 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:09.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.715 --rc genhtml_branch_coverage=1 00:33:09.715 --rc genhtml_function_coverage=1 00:33:09.715 --rc genhtml_legend=1 00:33:09.715 --rc geninfo_all_blocks=1 00:33:09.716 --rc geninfo_unexecuted_blocks=1 00:33:09.716 00:33:09.716 ' 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.716 23:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:17.882 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:17.882 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.882 
23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:17.882 Found net devices under 0000:31:00.0: cvl_0_0 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:17.882 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:17.883 Found net devices under 0000:31:00.1: cvl_0_1 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:17.883 23:01:43 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.883 23:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:33:17.883 00:33:17.883 --- 10.0.0.2 ping statistics --- 00:33:17.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.883 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:17.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:33:17.883 00:33:17.883 --- 10.0.0.1 ping statistics --- 00:33:17.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.883 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=891318 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 891318 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 891318 ']' 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.883 23:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.883 [2024-09-30 23:01:44.306813] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
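
The nvmf_tcp_init sequence traced above carves the two E810 ports into a point-to-point test topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, the NVMe/TCP port 4420 is opened in iptables, and both directions are verified with ping. A minimal standalone sketch of the same setup, using only the interface names and addresses from this run (run as root):

# condensed from nvmf/common.sh:nvmf_tcp_init as exercised above
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"           # target NIC lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"       # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target, as in the log
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
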
00:33:17.883 [2024-09-30 23:01:44.306878] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.883 [2024-09-30 23:01:44.395278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:17.883 [2024-09-30 23:01:44.490850] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.883 [2024-09-30 23:01:44.490921] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.883 [2024-09-30 23:01:44.490930] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.883 [2024-09-30 23:01:44.490937] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.883 [2024-09-30 23:01:44.490944] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:17.883 [2024-09-30 23:01:44.491110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:17.883 [2024-09-30 23:01:44.491381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:17.883 [2024-09-30 23:01:44.491382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.145 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.145 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:18.145 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:18.145 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:18.145 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:18.406 [2024-09-30 23:01:45.201763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:18.406 Malloc0 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
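
nvmfappstart -m 0xE starts the target with core mask 0xE (binary 1110), which is why the three reactor notices above land on cores 1, 2 and 3; core 0 is left free for the bdevperf initiator, which the log later starts with -c 0x1. Decoding such a mask needs nothing but shell arithmetic, for example:

# decode an SPDK core mask into core ids; 0xE prints cores 1, 2 and 3
mask=0xE
for i in $(seq 0 63); do
    (( (mask >> i) & 1 )) && echo "reactor on core $i"
done
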
00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:18.406 [2024-09-30 23:01:45.272644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:33:18.406 { 00:33:18.406 "params": { 00:33:18.406 "name": "Nvme$subsystem", 00:33:18.406 "trtype": "$TEST_TRANSPORT", 00:33:18.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.406 "adrfam": "ipv4", 00:33:18.406 "trsvcid": "$NVMF_PORT", 00:33:18.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.406 "hdgst": ${hdgst:-false}, 00:33:18.406 "ddgst": ${ddgst:-false} 00:33:18.406 }, 00:33:18.406 "method": "bdev_nvme_attach_controller" 00:33:18.406 } 00:33:18.406 EOF 00:33:18.406 )") 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:33:18.406 23:01:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:18.406 "params": { 00:33:18.406 "name": "Nvme1", 00:33:18.406 "trtype": "tcp", 00:33:18.406 "traddr": "10.0.0.2", 00:33:18.406 "adrfam": "ipv4", 00:33:18.406 "trsvcid": "4420", 00:33:18.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.406 "hdgst": false, 00:33:18.406 "ddgst": false 00:33:18.406 }, 00:33:18.406 "method": "bdev_nvme_attach_controller" 00:33:18.406 }' 00:33:18.406 [2024-09-30 23:01:45.331022] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
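
The rpc_cmd calls above build the target configuration over the /var/tmp/spdk.sock RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 (any host allowed, serial SPDK00000000000001), its namespace, and a listener on 10.0.0.2:4420. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the same bring-up can be replayed by hand with the arguments shown in the log; a sketch:

RPC=scripts/rpc.py    # talks to /var/tmp/spdk.sock by default
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0     # 64 MiB backing bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
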
00:33:18.406 [2024-09-30 23:01:45.331092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891645 ] 00:33:18.406 [2024-09-30 23:01:45.413128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.667 [2024-09-30 23:01:45.509478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.928 Running I/O for 1 seconds... 00:33:19.871 9002.00 IOPS, 35.16 MiB/s 00:33:19.871 Latency(us) 00:33:19.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.871 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:19.871 Verification LBA range: start 0x0 length 0x4000 00:33:19.871 Nvme1n1 : 1.01 9068.63 35.42 0.00 0.00 14043.45 2034.35 12397.23 00:33:19.871 =================================================================================================================== 00:33:19.871 Total : 9068.63 35.42 0.00 0.00 14043.45 2034.35 12397.23 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=891988 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:33:20.132 { 00:33:20.132 "params": { 00:33:20.132 "name": "Nvme$subsystem", 00:33:20.132 "trtype": "$TEST_TRANSPORT", 00:33:20.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.132 "adrfam": "ipv4", 00:33:20.132 "trsvcid": "$NVMF_PORT", 00:33:20.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.132 "hdgst": ${hdgst:-false}, 00:33:20.132 "ddgst": ${ddgst:-false} 00:33:20.132 }, 00:33:20.132 "method": "bdev_nvme_attach_controller" 00:33:20.132 } 00:33:20.132 EOF 00:33:20.132 )") 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:33:20.132 23:01:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:20.132 "params": { 00:33:20.132 "name": "Nvme1", 00:33:20.132 "trtype": "tcp", 00:33:20.132 "traddr": "10.0.0.2", 00:33:20.132 "adrfam": "ipv4", 00:33:20.132 "trsvcid": "4420", 00:33:20.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:20.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:20.132 "hdgst": false, 00:33:20.132 "ddgst": false 00:33:20.132 }, 00:33:20.132 "method": "bdev_nvme_attach_controller" 00:33:20.132 }' 00:33:20.132 [2024-09-30 23:01:47.013172] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
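
The JSON fragment printed twice above is what gen_nvmf_target_json feeds bdevperf on /dev/fd/62 and /dev/fd/63: a single bdev_nvme_attach_controller call that creates Nvme1n1 against the listener configured earlier. A file-based sketch of the second run (-q 128, 4 KiB verify I/O, 15 s), assuming the standard SPDK "subsystems" wrapper that gen_nvmf_target_json adds around the fragment shown:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15
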
00:33:20.132 [2024-09-30 23:01:47.013226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891988 ] 00:33:20.132 [2024-09-30 23:01:47.092366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.394 [2024-09-30 23:01:47.156237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.394 Running I/O for 15 seconds... 00:33:22.986 11154.00 IOPS, 43.57 MiB/s 11171.50 IOPS, 43.64 MiB/s 23:01:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 891318 00:33:22.986 23:01:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:22.986 [2024-09-30 23:01:49.967340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.986 [2024-09-30 23:01:49.967670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 
[2024-09-30 23:01:49.967915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.986 [2024-09-30 23:01:49.967922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.986 [2024-09-30 23:01:49.967932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.967939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.967948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.967955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.967964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.967972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.967981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.967988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.967998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.987 [2024-09-30 23:01:49.968340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968416] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.987 [2024-09-30 23:01:49.968585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.987 [2024-09-30 23:01:49.968592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106040 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:22.988 [2024-09-30 23:01:49.968944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.968987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.968994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 
23:01:49.969110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.988 [2024-09-30 23:01:49.969273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.988 [2024-09-30 23:01:49.969280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969446] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.989 [2024-09-30 23:01:49.969579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x933c30 is same with the state(6) to be set 00:33:22.989 [2024-09-30 23:01:49.969597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:22.989 [2024-09-30 23:01:49.969603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:22.989 [2024-09-30 23:01:49.969609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106432 len:8 PRP1 0x0 PRP2 0x0 00:33:22.989 [2024-09-30 23:01:49.969617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.989 [2024-09-30 23:01:49.969655] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x933c30 was disconnected and freed. reset controller. 00:33:22.989 [2024-09-30 23:01:49.973251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.989 [2024-09-30 23:01:49.973301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:22.989 [2024-09-30 23:01:49.974221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.989 [2024-09-30 23:01:49.974261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:22.989 [2024-09-30 23:01:49.974272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:22.989 [2024-09-30 23:01:49.974511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:22.989 [2024-09-30 23:01:49.974731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.989 [2024-09-30 23:01:49.974741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.989 [2024-09-30 23:01:49.974750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.989 [2024-09-30 23:01:49.978263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.989 [2024-09-30 23:01:49.987355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.989 [2024-09-30 23:01:49.987976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.989 [2024-09-30 23:01:49.988015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:22.989 [2024-09-30 23:01:49.988028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:22.989 [2024-09-30 23:01:49.988269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:22.989 [2024-09-30 23:01:49.988488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.989 [2024-09-30 23:01:49.988497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.989 [2024-09-30 23:01:49.988505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.989 [2024-09-30 23:01:49.992019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.252 [2024-09-30 23:01:50.001307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.252 [2024-09-30 23:01:50.001921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.252 [2024-09-30 23:01:50.001965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.252 [2024-09-30 23:01:50.001976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.252 [2024-09-30 23:01:50.002213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.252 [2024-09-30 23:01:50.002916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.252 [2024-09-30 23:01:50.002930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.252 [2024-09-30 23:01:50.002938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.252 [2024-09-30 23:01:50.006451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.252 [2024-09-30 23:01:50.015127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.252 [2024-09-30 23:01:50.015679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.252 [2024-09-30 23:01:50.015698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.252 [2024-09-30 23:01:50.015707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.252 [2024-09-30 23:01:50.015930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.252 [2024-09-30 23:01:50.016148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.252 [2024-09-30 23:01:50.016158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.252 [2024-09-30 23:01:50.016165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.252 [2024-09-30 23:01:50.019679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.252 [2024-09-30 23:01:50.028970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.252 [2024-09-30 23:01:50.029503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.252 [2024-09-30 23:01:50.029520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.252 [2024-09-30 23:01:50.029528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.252 [2024-09-30 23:01:50.029744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.029967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.029976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.029983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.033480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.253 [2024-09-30 23:01:50.042774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.043430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.043472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.043484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.043723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.043957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.043967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.043975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.047484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.253 [2024-09-30 23:01:50.056571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.057214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.057257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.057270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.057514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.057735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.057743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.057751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.061330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.253 [2024-09-30 23:01:50.070426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.071005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.071049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.071062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.071305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.071527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.071537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.071544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.075065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.253 [2024-09-30 23:01:50.084354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.084913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.084935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.084943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.085161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.085378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.085386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.085393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.088908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.253 [2024-09-30 23:01:50.098210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.098746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.098765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.098773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.098996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.099213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.099221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.099229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.102748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.253 [2024-09-30 23:01:50.112055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.112689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.112734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.112746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.112995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.113217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.113226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.113235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.116743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.253 [2024-09-30 23:01:50.125842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.126323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.126346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.126354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.126572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.126789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.126797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.126804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.130316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.253 [2024-09-30 23:01:50.139611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.140277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.140324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.140341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.140584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.140806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.140815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.140823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.144350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.253 [2024-09-30 23:01:50.153446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.154032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.154081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.154094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.154340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.154561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.154572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.154580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.253 [2024-09-30 23:01:50.158108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.253 [2024-09-30 23:01:50.167203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.253 [2024-09-30 23:01:50.167918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.253 [2024-09-30 23:01:50.167968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.253 [2024-09-30 23:01:50.167980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.253 [2024-09-30 23:01:50.168227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.253 [2024-09-30 23:01:50.168449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.253 [2024-09-30 23:01:50.168458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.253 [2024-09-30 23:01:50.168466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.254 [2024-09-30 23:01:50.172004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.254 [2024-09-30 23:01:50.181110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.254 [2024-09-30 23:01:50.181769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.254 [2024-09-30 23:01:50.181823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.254 [2024-09-30 23:01:50.181836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.254 [2024-09-30 23:01:50.182094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.254 [2024-09-30 23:01:50.182317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.254 [2024-09-30 23:01:50.182332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.254 [2024-09-30 23:01:50.182339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.254 [2024-09-30 23:01:50.185865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.254 [2024-09-30 23:01:50.194988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.254 [2024-09-30 23:01:50.195624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.254 [2024-09-30 23:01:50.195651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.254 [2024-09-30 23:01:50.195660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.254 [2024-09-30 23:01:50.195879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.254 [2024-09-30 23:01:50.196106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.254 [2024-09-30 23:01:50.196118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.254 [2024-09-30 23:01:50.196125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.254 [2024-09-30 23:01:50.199644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.254 [2024-09-30 23:01:50.208771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.254 [2024-09-30 23:01:50.209228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.254 [2024-09-30 23:01:50.209253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.254 [2024-09-30 23:01:50.209261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.254 [2024-09-30 23:01:50.209480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.254 [2024-09-30 23:01:50.209698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.254 [2024-09-30 23:01:50.209707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.254 [2024-09-30 23:01:50.209714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.254 [2024-09-30 23:01:50.213243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.254 [2024-09-30 23:01:50.222568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.254 [2024-09-30 23:01:50.223047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.254 [2024-09-30 23:01:50.223071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.254 [2024-09-30 23:01:50.223079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.254 [2024-09-30 23:01:50.223297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.254 [2024-09-30 23:01:50.223515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.254 [2024-09-30 23:01:50.223525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.254 [2024-09-30 23:01:50.223532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.254 [2024-09-30 23:01:50.227052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.254 [2024-09-30 23:01:50.236353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.254 [2024-09-30 23:01:50.236910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.254 [2024-09-30 23:01:50.236933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.254 [2024-09-30 23:01:50.236943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.254 [2024-09-30 23:01:50.237162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.254 [2024-09-30 23:01:50.237388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.254 [2024-09-30 23:01:50.237399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.254 [2024-09-30 23:01:50.237407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.254 [2024-09-30 23:01:50.240930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.254 [2024-09-30 23:01:50.250235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.254 [2024-09-30 23:01:50.250952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.254 [2024-09-30 23:01:50.251016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.254 [2024-09-30 23:01:50.251030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.254 [2024-09-30 23:01:50.251283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.254 [2024-09-30 23:01:50.251507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.254 [2024-09-30 23:01:50.251520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.254 [2024-09-30 23:01:50.251529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.254 [2024-09-30 23:01:50.255080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.254 [2024-09-30 23:01:50.263995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.254 [2024-09-30 23:01:50.264575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.254 [2024-09-30 23:01:50.264603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.254 [2024-09-30 23:01:50.264612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.254 [2024-09-30 23:01:50.264832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.254 [2024-09-30 23:01:50.265060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.254 [2024-09-30 23:01:50.265072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.254 [2024-09-30 23:01:50.265080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.517 [2024-09-30 23:01:50.268604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.517 [2024-09-30 23:01:50.277931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.517 [2024-09-30 23:01:50.278519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.517 [2024-09-30 23:01:50.278580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.517 [2024-09-30 23:01:50.278594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.517 [2024-09-30 23:01:50.278855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.279093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.279104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.279112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.282640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.518 [2024-09-30 23:01:50.291761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.292441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.292503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.292516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.292769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.293005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.293015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.293024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.296557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.518 [2024-09-30 23:01:50.305672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.306339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.306402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.306415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.306667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.306892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.306914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.306923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.310472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.518 9864.00 IOPS, 38.53 MiB/s [2024-09-30 23:01:50.319608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.320209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.320239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.320249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.320469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.320688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.320698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.320713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.324243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.518 [2024-09-30 23:01:50.333543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.334234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.334296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.334309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.334562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.334787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.334796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.334804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.338356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.518 [2024-09-30 23:01:50.347472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.348221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.348283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.348296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.348548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.348772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.348781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.348789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.352331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.518 [2024-09-30 23:01:50.361428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.362230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.362291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.362304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.362556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.362780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.362789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.362798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.366344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.518 [2024-09-30 23:01:50.375254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.375973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.376043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.376058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.376312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.376536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.376547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.376554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.380099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.518 [2024-09-30 23:01:50.389208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.389936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.389999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.390011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.390264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.390488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.390496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.390504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.394050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.518 [2024-09-30 23:01:50.403156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.403828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.403890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.403917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.404169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.404407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.404417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.518 [2024-09-30 23:01:50.404425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.518 [2024-09-30 23:01:50.407957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.518 [2024-09-30 23:01:50.417069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.518 [2024-09-30 23:01:50.417801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.518 [2024-09-30 23:01:50.417862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.518 [2024-09-30 23:01:50.417874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.518 [2024-09-30 23:01:50.418143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.518 [2024-09-30 23:01:50.418375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.518 [2024-09-30 23:01:50.418385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.418393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.519 [2024-09-30 23:01:50.421918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.519 [2024-09-30 23:01:50.431020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.519 [2024-09-30 23:01:50.431643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.519 [2024-09-30 23:01:50.431670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.519 [2024-09-30 23:01:50.431679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.519 [2024-09-30 23:01:50.431910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.519 [2024-09-30 23:01:50.432130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.519 [2024-09-30 23:01:50.432139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.432147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.519 [2024-09-30 23:01:50.435663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.519 [2024-09-30 23:01:50.445045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.519 [2024-09-30 23:01:50.445653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.519 [2024-09-30 23:01:50.445715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.519 [2024-09-30 23:01:50.445728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.519 [2024-09-30 23:01:50.445995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.519 [2024-09-30 23:01:50.446220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.519 [2024-09-30 23:01:50.446230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.446238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.519 [2024-09-30 23:01:50.449766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.519 [2024-09-30 23:01:50.458864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.519 [2024-09-30 23:01:50.459549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.519 [2024-09-30 23:01:50.459611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.519 [2024-09-30 23:01:50.459624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.519 [2024-09-30 23:01:50.459877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.519 [2024-09-30 23:01:50.460117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.519 [2024-09-30 23:01:50.460127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.460135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.519 [2024-09-30 23:01:50.463671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.519 [2024-09-30 23:01:50.472782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.519 [2024-09-30 23:01:50.473475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.519 [2024-09-30 23:01:50.473538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.519 [2024-09-30 23:01:50.473551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.519 [2024-09-30 23:01:50.473803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.519 [2024-09-30 23:01:50.474044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.519 [2024-09-30 23:01:50.474055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.474064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.519 [2024-09-30 23:01:50.477597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.519 [2024-09-30 23:01:50.486698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.519 [2024-09-30 23:01:50.487345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.519 [2024-09-30 23:01:50.487374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.519 [2024-09-30 23:01:50.487383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.519 [2024-09-30 23:01:50.487603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.519 [2024-09-30 23:01:50.487830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.519 [2024-09-30 23:01:50.487840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.487848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.519 [2024-09-30 23:01:50.491377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.519 [2024-09-30 23:01:50.500471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.519 [2024-09-30 23:01:50.501175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.519 [2024-09-30 23:01:50.501237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.519 [2024-09-30 23:01:50.501250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.519 [2024-09-30 23:01:50.501503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.519 [2024-09-30 23:01:50.501726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.519 [2024-09-30 23:01:50.501735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.501743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.519 [2024-09-30 23:01:50.505302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.519 [2024-09-30 23:01:50.514231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.519 [2024-09-30 23:01:50.514848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.519 [2024-09-30 23:01:50.514921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.519 [2024-09-30 23:01:50.514942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.519 [2024-09-30 23:01:50.515196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.519 [2024-09-30 23:01:50.515419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.519 [2024-09-30 23:01:50.515430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.515439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.519 [2024-09-30 23:01:50.518985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.519 [2024-09-30 23:01:50.528080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.519 [2024-09-30 23:01:50.528759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.519 [2024-09-30 23:01:50.528820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.519 [2024-09-30 23:01:50.528833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.519 [2024-09-30 23:01:50.529100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.519 [2024-09-30 23:01:50.529325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.519 [2024-09-30 23:01:50.529334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.519 [2024-09-30 23:01:50.529342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.783 [2024-09-30 23:01:50.532862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.783 [2024-09-30 23:01:50.541996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.783 [2024-09-30 23:01:50.542629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.783 [2024-09-30 23:01:50.542658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.783 [2024-09-30 23:01:50.542667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.783 [2024-09-30 23:01:50.542888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.783 [2024-09-30 23:01:50.543122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.783 [2024-09-30 23:01:50.543133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.783 [2024-09-30 23:01:50.543142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.783 [2024-09-30 23:01:50.546657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.783 [2024-09-30 23:01:50.555749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.783 [2024-09-30 23:01:50.556405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.783 [2024-09-30 23:01:50.556467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.783 [2024-09-30 23:01:50.556479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.783 [2024-09-30 23:01:50.556733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.783 [2024-09-30 23:01:50.556974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.783 [2024-09-30 23:01:50.556994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.783 [2024-09-30 23:01:50.557002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.783 [2024-09-30 23:01:50.560535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.783 [2024-09-30 23:01:50.569654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.783 [2024-09-30 23:01:50.570353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.783 [2024-09-30 23:01:50.570414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.783 [2024-09-30 23:01:50.570427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.783 [2024-09-30 23:01:50.570680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.783 [2024-09-30 23:01:50.570919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.783 [2024-09-30 23:01:50.570929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.783 [2024-09-30 23:01:50.570937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.783 [2024-09-30 23:01:50.574473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.783 [2024-09-30 23:01:50.583575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.783 [2024-09-30 23:01:50.584259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.783 [2024-09-30 23:01:50.584321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.783 [2024-09-30 23:01:50.584334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.783 [2024-09-30 23:01:50.584587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.783 [2024-09-30 23:01:50.584811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.783 [2024-09-30 23:01:50.584819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.783 [2024-09-30 23:01:50.584827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.783 [2024-09-30 23:01:50.588387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.783 [2024-09-30 23:01:50.597505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.783 [2024-09-30 23:01:50.598135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.783 [2024-09-30 23:01:50.598164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.783 [2024-09-30 23:01:50.598172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.783 [2024-09-30 23:01:50.598393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.783 [2024-09-30 23:01:50.598611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.598620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.598628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.602150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.784 [2024-09-30 23:01:50.611272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.611863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.611886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.611906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.612126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.612344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.612354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.612362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.615873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.784 [2024-09-30 23:01:50.625187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.625849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.625923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.625937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.626189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.626413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.626425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.626433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.629970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.784 [2024-09-30 23:01:50.639092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.639778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.639839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.639851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.640122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.640347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.640356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.640364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.643898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.784 [2024-09-30 23:01:50.653007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.653635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.653663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.653672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.653920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.654142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.654153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.654160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.657675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.784 [2024-09-30 23:01:50.666773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.667428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.667489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.667502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.667755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.667998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.668009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.668017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.671554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.784 [2024-09-30 23:01:50.680662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.681258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.681318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.681331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.681583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.681807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.681816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.681824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.685370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.784 [2024-09-30 23:01:50.694490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.695171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.695231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.695244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.695497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.695721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.695730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.695746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.699552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.784 [2024-09-30 23:01:50.708289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.708923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.708952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.708961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.709182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.709402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.709413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.709420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.712947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.784 [2024-09-30 23:01:50.722056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.722715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.722777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.722789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.723059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.723284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.723295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.723303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.726840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.784 [2024-09-30 23:01:50.735969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.784 [2024-09-30 23:01:50.736475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.784 [2024-09-30 23:01:50.736504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.784 [2024-09-30 23:01:50.736512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.784 [2024-09-30 23:01:50.736733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.784 [2024-09-30 23:01:50.736962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.784 [2024-09-30 23:01:50.736973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.784 [2024-09-30 23:01:50.736981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.784 [2024-09-30 23:01:50.740508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.785 [2024-09-30 23:01:50.749818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.785 [2024-09-30 23:01:50.750411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.785 [2024-09-30 23:01:50.750443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.785 [2024-09-30 23:01:50.750451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.785 [2024-09-30 23:01:50.750671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.785 [2024-09-30 23:01:50.750889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.785 [2024-09-30 23:01:50.750910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.785 [2024-09-30 23:01:50.750918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.785 [2024-09-30 23:01:50.754432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.785 [2024-09-30 23:01:50.762474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.785 [2024-09-30 23:01:50.763104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.785 [2024-09-30 23:01:50.763159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.785 [2024-09-30 23:01:50.763169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.785 [2024-09-30 23:01:50.763353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.785 [2024-09-30 23:01:50.763509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.785 [2024-09-30 23:01:50.763516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.785 [2024-09-30 23:01:50.763523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.785 [2024-09-30 23:01:50.765963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.785 [2024-09-30 23:01:50.775195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.785 [2024-09-30 23:01:50.775741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.785 [2024-09-30 23:01:50.775763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.785 [2024-09-30 23:01:50.775769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.785 [2024-09-30 23:01:50.775932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.785 [2024-09-30 23:01:50.776085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.785 [2024-09-30 23:01:50.776092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.785 [2024-09-30 23:01:50.776097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.785 [2024-09-30 23:01:50.778513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.785 [2024-09-30 23:01:50.787852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.785 [2024-09-30 23:01:50.788441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.785 [2024-09-30 23:01:50.788489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:23.785 [2024-09-30 23:01:50.788498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:23.785 [2024-09-30 23:01:50.788675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:23.785 [2024-09-30 23:01:50.788836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.785 [2024-09-30 23:01:50.788842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.785 [2024-09-30 23:01:50.788848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.785 [2024-09-30 23:01:50.791289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.048 [2024-09-30 23:01:50.800516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.048 [2024-09-30 23:01:50.801007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.048 [2024-09-30 23:01:50.801028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.048 [2024-09-30 23:01:50.801034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.048 [2024-09-30 23:01:50.801186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.048 [2024-09-30 23:01:50.801337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.048 [2024-09-30 23:01:50.801343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.048 [2024-09-30 23:01:50.801349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.048 [2024-09-30 23:01:50.803766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.048 [2024-09-30 23:01:50.813126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.048 [2024-09-30 23:01:50.813626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.048 [2024-09-30 23:01:50.813641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.048 [2024-09-30 23:01:50.813647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.048 [2024-09-30 23:01:50.813797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.813953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.813960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.813965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.816373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.049 [2024-09-30 23:01:50.825725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.826445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.826483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.826492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.826662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.826815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.826821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.826827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.829258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.049 [2024-09-30 23:01:50.838334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.838939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.838976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.838984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.839153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.839305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.839311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.839316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.841742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.049 [2024-09-30 23:01:50.850947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.851527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.851562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.851570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.851738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.851891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.851907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.851913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.854327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.049 [2024-09-30 23:01:50.863530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.864095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.864129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.864137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.864303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.864455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.864462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.864467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.866883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.049 [2024-09-30 23:01:50.876232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.876821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.876853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.876865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.877041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.877193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.877199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.877205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.879611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.049 [2024-09-30 23:01:50.888957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.889462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.889476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.889482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.889631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.889780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.889786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.889791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.892229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.049 [2024-09-30 23:01:50.901568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.902186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.902216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.902224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.902389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.902540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.902546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.902551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.904965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.049 [2024-09-30 23:01:50.914166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.914742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.914772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.914781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.914951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.915103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.915114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.915120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.049 [2024-09-30 23:01:50.917524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.049 [2024-09-30 23:01:50.926863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.049 [2024-09-30 23:01:50.927347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.049 [2024-09-30 23:01:50.927362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.049 [2024-09-30 23:01:50.927367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.049 [2024-09-30 23:01:50.927516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.049 [2024-09-30 23:01:50.927665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.049 [2024-09-30 23:01:50.927670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.049 [2024-09-30 23:01:50.927675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:50.930081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.050 [2024-09-30 23:01:50.939570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:50.940035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:50.940048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:50.940053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:50.940203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:50.940352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:50.940359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:50.940364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:50.942763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.050 [2024-09-30 23:01:50.952246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:50.952719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:50.952731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:50.952736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:50.952885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:50.953040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:50.953047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:50.953051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:50.955453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.050 [2024-09-30 23:01:50.964933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:50.965442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:50.965472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:50.965481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:50.965646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:50.965798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:50.965804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:50.965809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:50.968221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.050 [2024-09-30 23:01:50.977553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:50.978213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:50.978243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:50.978252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:50.978416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:50.978568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:50.978574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:50.978580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:50.980991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.050 [2024-09-30 23:01:50.990193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:50.990683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:50.990698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:50.990704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:50.990853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:50.991009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:50.991017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:50.991022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:50.993421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.050 [2024-09-30 23:01:51.002861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:51.003402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:51.003431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:51.003440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:51.003608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:51.003759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:51.003766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:51.003771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:51.006195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.050 [2024-09-30 23:01:51.015538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:51.016158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:51.016188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:51.016197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:51.016361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:51.016513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:51.016519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:51.016524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:51.018940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.050 [2024-09-30 23:01:51.028132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:51.028703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:51.028732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:51.028741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:51.028913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:51.029065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:51.029071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:51.029076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:51.031482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.050 [2024-09-30 23:01:51.040836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:51.041425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:51.041455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:51.041464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:51.041628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:51.041779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:51.041785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:51.041794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:51.044210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.050 [2024-09-30 23:01:51.053546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.050 [2024-09-30 23:01:51.054011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.050 [2024-09-30 23:01:51.054040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.050 [2024-09-30 23:01:51.054049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.050 [2024-09-30 23:01:51.054213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.050 [2024-09-30 23:01:51.054365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.050 [2024-09-30 23:01:51.054371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.050 [2024-09-30 23:01:51.054376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.050 [2024-09-30 23:01:51.056790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.313 [2024-09-30 23:01:51.066133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.313 [2024-09-30 23:01:51.066705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.313 [2024-09-30 23:01:51.066735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.313 [2024-09-30 23:01:51.066743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.313 [2024-09-30 23:01:51.066918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.313 [2024-09-30 23:01:51.067071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.313 [2024-09-30 23:01:51.067077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.313 [2024-09-30 23:01:51.067083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.313 [2024-09-30 23:01:51.069490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.313 [2024-09-30 23:01:51.078834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.313 [2024-09-30 23:01:51.079447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.313 [2024-09-30 23:01:51.079477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.313 [2024-09-30 23:01:51.079486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.313 [2024-09-30 23:01:51.079651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.313 [2024-09-30 23:01:51.079802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.313 [2024-09-30 23:01:51.079808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.313 [2024-09-30 23:01:51.079813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.313 [2024-09-30 23:01:51.082254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.313 [2024-09-30 23:01:51.091459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.313 [2024-09-30 23:01:51.092028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.313 [2024-09-30 23:01:51.092062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.313 [2024-09-30 23:01:51.092071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.313 [2024-09-30 23:01:51.092236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.313 [2024-09-30 23:01:51.092387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.313 [2024-09-30 23:01:51.092394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.313 [2024-09-30 23:01:51.092399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.313 [2024-09-30 23:01:51.094810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.313 [2024-09-30 23:01:51.104151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.313 [2024-09-30 23:01:51.104732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.313 [2024-09-30 23:01:51.104762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.313 [2024-09-30 23:01:51.104771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.313 [2024-09-30 23:01:51.104943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.313 [2024-09-30 23:01:51.105096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.313 [2024-09-30 23:01:51.105102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.313 [2024-09-30 23:01:51.105107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.313 [2024-09-30 23:01:51.107522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.313 [2024-09-30 23:01:51.116858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.313 [2024-09-30 23:01:51.117438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.313 [2024-09-30 23:01:51.117468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.313 [2024-09-30 23:01:51.117477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.313 [2024-09-30 23:01:51.117641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.313 [2024-09-30 23:01:51.117793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.313 [2024-09-30 23:01:51.117799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.313 [2024-09-30 23:01:51.117804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.313 [2024-09-30 23:01:51.120216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.313 [2024-09-30 23:01:51.129550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.313 [2024-09-30 23:01:51.130038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.313 [2024-09-30 23:01:51.130053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.313 [2024-09-30 23:01:51.130058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.313 [2024-09-30 23:01:51.130208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.313 [2024-09-30 23:01:51.130360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.313 [2024-09-30 23:01:51.130366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.313 [2024-09-30 23:01:51.130371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.313 [2024-09-30 23:01:51.132775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.313 [2024-09-30 23:01:51.142257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.313 [2024-09-30 23:01:51.142736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.313 [2024-09-30 23:01:51.142766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:24.313 [2024-09-30 23:01:51.142775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:24.314 [2024-09-30 23:01:51.142951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:24.314 [2024-09-30 23:01:51.143103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.314 [2024-09-30 23:01:51.143109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.314 [2024-09-30 23:01:51.143114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.314 [2024-09-30 23:01:51.145520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.314 [2024-09-30 23:01:51.154854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.155245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.155259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.155265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.155414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.155562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.155568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.155573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.157978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.167446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.167897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.167910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.167915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.168064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.168213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.168219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.168224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.170636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.180113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.180559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.180570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.180576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.180724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.180873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.180878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.180883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.183288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.192759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.193315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.193345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.193354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.193518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.193669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.193675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.193680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.196094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.205432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.206001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.206030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.206039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.206207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.206359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.206365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.206370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.208788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.218136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.218702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.218731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.218743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.218916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.219068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.219074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.219079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.221486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.230827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.231356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.231371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.231376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.231526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.231674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.231680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.231685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.234093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.243436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.244003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.244033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.244042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.244210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.244361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.244368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.244373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.246782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.256120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.256569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.256583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.256588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.256737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.256886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.256900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.314 [2024-09-30 23:01:51.256905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.314 [2024-09-30 23:01:51.259305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.314 [2024-09-30 23:01:51.268775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.314 [2024-09-30 23:01:51.269324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.314 [2024-09-30 23:01:51.269354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.314 [2024-09-30 23:01:51.269362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.314 [2024-09-30 23:01:51.269527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.314 [2024-09-30 23:01:51.269679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.314 [2024-09-30 23:01:51.269685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.315 [2024-09-30 23:01:51.269690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.315 [2024-09-30 23:01:51.272106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.315 [2024-09-30 23:01:51.281434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.315 [2024-09-30 23:01:51.281921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.315 [2024-09-30 23:01:51.281937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.315 [2024-09-30 23:01:51.281942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.315 [2024-09-30 23:01:51.282092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.315 [2024-09-30 23:01:51.282241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.315 [2024-09-30 23:01:51.282247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.315 [2024-09-30 23:01:51.282251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.315 [2024-09-30 23:01:51.284652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.315 [2024-09-30 23:01:51.294129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.315 [2024-09-30 23:01:51.294577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.315 [2024-09-30 23:01:51.294607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.315 [2024-09-30 23:01:51.294616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.315 [2024-09-30 23:01:51.294783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.315 [2024-09-30 23:01:51.294941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.315 [2024-09-30 23:01:51.294948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.315 [2024-09-30 23:01:51.294953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.315 [2024-09-30 23:01:51.297359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.315 [2024-09-30 23:01:51.306839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.315 [2024-09-30 23:01:51.307399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.315 [2024-09-30 23:01:51.307428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.315 [2024-09-30 23:01:51.307437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.315 [2024-09-30 23:01:51.307611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.315 [2024-09-30 23:01:51.307763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.315 [2024-09-30 23:01:51.307769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.315 [2024-09-30 23:01:51.307775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.315 [2024-09-30 23:01:51.310185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.315 7398.00 IOPS, 28.90 MiB/s [2024-09-30 23:01:51.319526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.315 [2024-09-30 23:01:51.319907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.315 [2024-09-30 23:01:51.319922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.315 [2024-09-30 23:01:51.319928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.315 [2024-09-30 23:01:51.320077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.315 [2024-09-30 23:01:51.320226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.315 [2024-09-30 23:01:51.320232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.315 [2024-09-30 23:01:51.320237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.315 [2024-09-30 23:01:51.322643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.578 [2024-09-30 23:01:51.332130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.578 [2024-09-30 23:01:51.332610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.578 [2024-09-30 23:01:51.332622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.578 [2024-09-30 23:01:51.332627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.578 [2024-09-30 23:01:51.332776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.578 [2024-09-30 23:01:51.332929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.578 [2024-09-30 23:01:51.332935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.578 [2024-09-30 23:01:51.332941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.578 [2024-09-30 23:01:51.335341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.578 [2024-09-30 23:01:51.344826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.578 [2024-09-30 23:01:51.345372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.578 [2024-09-30 23:01:51.345402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.578 [2024-09-30 23:01:51.345411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.578 [2024-09-30 23:01:51.345579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.578 [2024-09-30 23:01:51.345730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.578 [2024-09-30 23:01:51.345737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.578 [2024-09-30 23:01:51.345742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.578 [2024-09-30 23:01:51.348156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.578 [2024-09-30 23:01:51.357500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.578 [2024-09-30 23:01:51.358026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.578 [2024-09-30 23:01:51.358056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.578 [2024-09-30 23:01:51.358065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.578 [2024-09-30 23:01:51.358231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.578 [2024-09-30 23:01:51.358383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.578 [2024-09-30 23:01:51.358389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.578 [2024-09-30 23:01:51.358395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.578 [2024-09-30 23:01:51.360806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.578 [2024-09-30 23:01:51.370156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.578 [2024-09-30 23:01:51.370652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.578 [2024-09-30 23:01:51.370682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.578 [2024-09-30 23:01:51.370691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.578 [2024-09-30 23:01:51.370860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.578 [2024-09-30 23:01:51.371018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.578 [2024-09-30 23:01:51.371024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.578 [2024-09-30 23:01:51.371029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.578 [2024-09-30 23:01:51.373433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.578 [2024-09-30 23:01:51.382762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.578 [2024-09-30 23:01:51.383221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.578 [2024-09-30 23:01:51.383237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.578 [2024-09-30 23:01:51.383243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.578 [2024-09-30 23:01:51.383393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.383542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.383548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.383557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.385966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.395468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.395920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.395934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.395939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.396089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.396237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.396243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.396248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.398659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.408146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.408513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.408525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.408530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.408679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.408827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.408833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.408838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.411243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.420729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.421179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.421192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.421197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.421346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.421494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.421500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.421505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.423909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.433387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.433943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.433973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.433982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.434149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.434300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.434307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.434312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.436722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.446064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.446521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.446535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.446541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.446689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.446838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.446844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.446849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.449253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.458724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.459309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.459339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.459347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.459512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.459663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.459670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.459675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.462084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.471430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.472006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.472036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.472045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.472209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.472364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.472371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.472376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.474787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.484123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.484706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.484735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.484745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.484915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.485067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.485074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.485080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.487485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.496836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.497428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.497458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.497467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.497631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.497783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.497789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.497794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.579 [2024-09-30 23:01:51.500207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.579 [2024-09-30 23:01:51.509546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.579 [2024-09-30 23:01:51.510010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.579 [2024-09-30 23:01:51.510025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.579 [2024-09-30 23:01:51.510031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.579 [2024-09-30 23:01:51.510180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.579 [2024-09-30 23:01:51.510329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.579 [2024-09-30 23:01:51.510335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.579 [2024-09-30 23:01:51.510340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.580 [2024-09-30 23:01:51.512745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.580 [2024-09-30 23:01:51.522228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.580 [2024-09-30 23:01:51.522582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.580 [2024-09-30 23:01:51.522595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.580 [2024-09-30 23:01:51.522600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.580 [2024-09-30 23:01:51.522749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.580 [2024-09-30 23:01:51.522902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.580 [2024-09-30 23:01:51.522908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.580 [2024-09-30 23:01:51.522913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.580 [2024-09-30 23:01:51.525310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.580 [2024-09-30 23:01:51.534920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.580 [2024-09-30 23:01:51.535462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.580 [2024-09-30 23:01:51.535492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.580 [2024-09-30 23:01:51.535500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.580 [2024-09-30 23:01:51.535664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.580 [2024-09-30 23:01:51.535816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.580 [2024-09-30 23:01:51.535822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.580 [2024-09-30 23:01:51.535827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.580 [2024-09-30 23:01:51.538237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.580 [2024-09-30 23:01:51.547579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.580 [2024-09-30 23:01:51.548025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.580 [2024-09-30 23:01:51.548040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.580 [2024-09-30 23:01:51.548046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.580 [2024-09-30 23:01:51.548196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.580 [2024-09-30 23:01:51.548344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.580 [2024-09-30 23:01:51.548350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.580 [2024-09-30 23:01:51.548355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.580 [2024-09-30 23:01:51.550753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.580 [2024-09-30 23:01:51.560225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.580 [2024-09-30 23:01:51.560768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.580 [2024-09-30 23:01:51.560779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.580 [2024-09-30 23:01:51.560790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.580 [2024-09-30 23:01:51.560944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.580 [2024-09-30 23:01:51.561094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.580 [2024-09-30 23:01:51.561100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.580 [2024-09-30 23:01:51.561105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.580 [2024-09-30 23:01:51.563502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.580 [2024-09-30 23:01:51.572830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.580 [2024-09-30 23:01:51.573355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.580 [2024-09-30 23:01:51.573368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.580 [2024-09-30 23:01:51.573373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.580 [2024-09-30 23:01:51.573522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.580 [2024-09-30 23:01:51.573670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.580 [2024-09-30 23:01:51.573675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.580 [2024-09-30 23:01:51.573680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.580 [2024-09-30 23:01:51.576082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.580 [2024-09-30 23:01:51.585409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.580 [2024-09-30 23:01:51.585983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.580 [2024-09-30 23:01:51.586013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.580 [2024-09-30 23:01:51.586022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.580 [2024-09-30 23:01:51.586189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.580 [2024-09-30 23:01:51.586340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.580 [2024-09-30 23:01:51.586346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.580 [2024-09-30 23:01:51.586351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.580 [2024-09-30 23:01:51.588761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.843 [2024-09-30 23:01:51.598100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.843 [2024-09-30 23:01:51.598569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.843 [2024-09-30 23:01:51.598584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.843 [2024-09-30 23:01:51.598590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.843 [2024-09-30 23:01:51.598739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.843 [2024-09-30 23:01:51.598887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.843 [2024-09-30 23:01:51.598903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.843 [2024-09-30 23:01:51.598910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.843 [2024-09-30 23:01:51.601310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.843 [2024-09-30 23:01:51.610780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.843 [2024-09-30 23:01:51.611414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.843 [2024-09-30 23:01:51.611444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.843 [2024-09-30 23:01:51.611453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.843 [2024-09-30 23:01:51.611617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.843 [2024-09-30 23:01:51.611769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.843 [2024-09-30 23:01:51.611775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.843 [2024-09-30 23:01:51.611780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.843 [2024-09-30 23:01:51.614190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.843 [2024-09-30 23:01:51.623384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.843 [2024-09-30 23:01:51.623805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.843 [2024-09-30 23:01:51.623835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.843 [2024-09-30 23:01:51.623843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.843 [2024-09-30 23:01:51.624017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.843 [2024-09-30 23:01:51.624170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.843 [2024-09-30 23:01:51.624176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.843 [2024-09-30 23:01:51.624181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.843 [2024-09-30 23:01:51.626583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.843 [2024-09-30 23:01:51.636060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.843 [2024-09-30 23:01:51.636661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.843 [2024-09-30 23:01:51.636691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.843 [2024-09-30 23:01:51.636699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.843 [2024-09-30 23:01:51.636864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.843 [2024-09-30 23:01:51.637024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.843 [2024-09-30 23:01:51.637031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.843 [2024-09-30 23:01:51.637036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.843 [2024-09-30 23:01:51.639445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.843 [2024-09-30 23:01:51.648645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.843 [2024-09-30 23:01:51.649238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.843 [2024-09-30 23:01:51.649269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.843 [2024-09-30 23:01:51.649277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.843 [2024-09-30 23:01:51.649441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.843 [2024-09-30 23:01:51.649593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.843 [2024-09-30 23:01:51.649599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.843 [2024-09-30 23:01:51.649604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.843 [2024-09-30 23:01:51.652017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.843 [2024-09-30 23:01:51.661390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.843 [2024-09-30 23:01:51.661960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.843 [2024-09-30 23:01:51.661989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.843 [2024-09-30 23:01:51.661998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.843 [2024-09-30 23:01:51.662165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.843 [2024-09-30 23:01:51.662316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.843 [2024-09-30 23:01:51.662322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.843 [2024-09-30 23:01:51.662328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.843 [2024-09-30 23:01:51.664738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.843 [2024-09-30 23:01:51.674080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.843 [2024-09-30 23:01:51.674655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.843 [2024-09-30 23:01:51.674686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.843 [2024-09-30 23:01:51.674694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.674859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.675018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.675025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.675030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.677435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.686765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.687234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.687249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.687254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.687407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.687556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.687562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.687567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.689976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.699609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.700069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.700083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.700089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.700238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.700387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.700393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.700398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.702798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.712280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.712755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.712767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.712772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.712926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.713075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.713080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.713085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.715483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.724956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.725496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.725526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.725534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.725699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.725851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.725857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.725866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.728280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.737614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.738214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.738244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.738253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.738417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.738569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.738575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.738580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.740997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.750189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.750689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.750704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.750710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.750860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.751013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.751019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.751024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.753425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.762892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.763317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.763347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.763355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.763519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.763671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.763677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.763682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.766095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.775575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.776181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.776211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.776220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.776384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.776535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.776542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.776547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.778956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.788313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.788667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.788681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.788687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.788836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.788989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.788996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.844 [2024-09-30 23:01:51.789000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.844 [2024-09-30 23:01:51.791407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.844 [2024-09-30 23:01:51.801028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.844 [2024-09-30 23:01:51.801397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.844 [2024-09-30 23:01:51.801409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.844 [2024-09-30 23:01:51.801415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.844 [2024-09-30 23:01:51.801563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.844 [2024-09-30 23:01:51.801711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.844 [2024-09-30 23:01:51.801717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.845 [2024-09-30 23:01:51.801722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.845 [2024-09-30 23:01:51.804125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.845 [2024-09-30 23:01:51.813602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.845 [2024-09-30 23:01:51.814068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.845 [2024-09-30 23:01:51.814080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.845 [2024-09-30 23:01:51.814086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.845 [2024-09-30 23:01:51.814234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.845 [2024-09-30 23:01:51.814386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.845 [2024-09-30 23:01:51.814392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.845 [2024-09-30 23:01:51.814397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.845 [2024-09-30 23:01:51.816794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.845 [2024-09-30 23:01:51.826275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.845 [2024-09-30 23:01:51.826809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.845 [2024-09-30 23:01:51.826839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.845 [2024-09-30 23:01:51.826848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.845 [2024-09-30 23:01:51.827021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.845 [2024-09-30 23:01:51.827173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.845 [2024-09-30 23:01:51.827179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.845 [2024-09-30 23:01:51.827184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.845 [2024-09-30 23:01:51.829588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.845 [2024-09-30 23:01:51.838921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.845 [2024-09-30 23:01:51.839420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.845 [2024-09-30 23:01:51.839434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.845 [2024-09-30 23:01:51.839440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.845 [2024-09-30 23:01:51.839593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.845 [2024-09-30 23:01:51.839743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.845 [2024-09-30 23:01:51.839749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.845 [2024-09-30 23:01:51.839754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.845 [2024-09-30 23:01:51.842161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:24.845 [2024-09-30 23:01:51.851632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:24.845 [2024-09-30 23:01:51.852113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.845 [2024-09-30 23:01:51.852127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:24.845 [2024-09-30 23:01:51.852132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:24.845 [2024-09-30 23:01:51.852281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:24.845 [2024-09-30 23:01:51.852429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:24.845 [2024-09-30 23:01:51.852434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:24.845 [2024-09-30 23:01:51.852439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:24.845 [2024-09-30 23:01:51.854842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.107 [2024-09-30 23:01:51.864314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.864754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.864765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.864771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.864923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.865072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.865077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.865082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.867480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.876952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.877287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.877299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.877304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.877453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.877600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.877606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.877611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.880012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.889623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.890115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.890127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.890132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.890281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.890429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.890435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.890440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.892838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.902307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.902842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.902872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.902885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.903057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.903210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.903216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.903221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.905626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.914965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.915427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.915441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.915447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.915596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.915744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.915750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.915755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.918165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.927636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.927982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.927998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.928003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.928153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.928302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.928308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.928313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.930712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.940336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.940702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.940714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.940719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.940868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.941021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.941034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.941039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.943440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.952912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.953423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.953454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.953462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.953626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.953778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.953784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.953789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.956199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.965530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.966110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.966140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.966149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.966314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.966465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.966471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.966477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.968884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.978232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.108 [2024-09-30 23:01:51.978800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.108 [2024-09-30 23:01:51.978830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.108 [2024-09-30 23:01:51.978839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.108 [2024-09-30 23:01:51.979012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.108 [2024-09-30 23:01:51.979164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.108 [2024-09-30 23:01:51.979170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.108 [2024-09-30 23:01:51.979175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.108 [2024-09-30 23:01:51.981580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.108 [2024-09-30 23:01:51.990925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:51.991431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:51.991445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:51.991452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:51.991602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:51.991751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:51.991758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:51.991762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:51.994168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.003510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.004041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.004072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.004081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.004246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.004397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.004403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.004409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.006818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.016161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.016652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.016667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.016672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.016821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.016976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.016983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.016988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.019397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.028734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.029393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.029423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.029431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.029601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.029752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.029759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.029764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.032271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.041346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.041938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.041968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.041977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.042145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.042296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.042303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.042308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.044719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.054054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.054596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.054626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.054635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.054799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.054957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.054964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.054969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.057373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.066707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.067267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.067298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.067307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.067471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.067623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.067629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.067638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.070051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.079394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.079949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.079979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.079988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.080152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.080304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.080310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.080315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.082725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.092071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.092636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.092666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.092674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.092839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.092997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.093005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.093011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.095415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.104754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.105226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.105240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.109 [2024-09-30 23:01:52.105246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.109 [2024-09-30 23:01:52.105395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.109 [2024-09-30 23:01:52.105543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.109 [2024-09-30 23:01:52.105549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.109 [2024-09-30 23:01:52.105554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.109 [2024-09-30 23:01:52.107991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.109 [2024-09-30 23:01:52.117327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.109 [2024-09-30 23:01:52.117860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.109 [2024-09-30 23:01:52.117890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.110 [2024-09-30 23:01:52.117912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.110 [2024-09-30 23:01:52.118079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.110 [2024-09-30 23:01:52.118231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.110 [2024-09-30 23:01:52.118237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.110 [2024-09-30 23:01:52.118242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.110 [2024-09-30 23:01:52.120647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.373 [2024-09-30 23:01:52.129984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.373 [2024-09-30 23:01:52.130479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.373 [2024-09-30 23:01:52.130493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.373 [2024-09-30 23:01:52.130499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.373 [2024-09-30 23:01:52.130648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.373 [2024-09-30 23:01:52.130796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.373 [2024-09-30 23:01:52.130802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.373 [2024-09-30 23:01:52.130807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.373 [2024-09-30 23:01:52.133212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.373 [2024-09-30 23:01:52.142685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.373 [2024-09-30 23:01:52.143231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.373 [2024-09-30 23:01:52.143260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.373 [2024-09-30 23:01:52.143269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.373 [2024-09-30 23:01:52.143433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.373 [2024-09-30 23:01:52.143585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.373 [2024-09-30 23:01:52.143591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.373 [2024-09-30 23:01:52.143597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.373 [2024-09-30 23:01:52.146009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.373 [2024-09-30 23:01:52.155336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.373 [2024-09-30 23:01:52.155925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.373 [2024-09-30 23:01:52.155955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.373 [2024-09-30 23:01:52.155964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.373 [2024-09-30 23:01:52.156132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.373 [2024-09-30 23:01:52.156283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.373 [2024-09-30 23:01:52.156289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.373 [2024-09-30 23:01:52.156295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.373 [2024-09-30 23:01:52.158705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.373 [2024-09-30 23:01:52.168037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.373 [2024-09-30 23:01:52.168603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.373 [2024-09-30 23:01:52.168633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.373 [2024-09-30 23:01:52.168641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.373 [2024-09-30 23:01:52.168806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.373 [2024-09-30 23:01:52.168964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.373 [2024-09-30 23:01:52.168971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.373 [2024-09-30 23:01:52.168976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.373 [2024-09-30 23:01:52.171384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.373 [2024-09-30 23:01:52.180716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.373 [2024-09-30 23:01:52.181308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.373 [2024-09-30 23:01:52.181338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.373 [2024-09-30 23:01:52.181347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.373 [2024-09-30 23:01:52.181511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.373 [2024-09-30 23:01:52.181662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.373 [2024-09-30 23:01:52.181669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.373 [2024-09-30 23:01:52.181674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.373 [2024-09-30 23:01:52.184086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.373 [2024-09-30 23:01:52.193427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.373 [2024-09-30 23:01:52.193991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.373 [2024-09-30 23:01:52.194021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.373 [2024-09-30 23:01:52.194029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.373 [2024-09-30 23:01:52.194196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.373 [2024-09-30 23:01:52.194347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.373 [2024-09-30 23:01:52.194354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.373 [2024-09-30 23:01:52.194359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.373 [2024-09-30 23:01:52.196775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.373 [2024-09-30 23:01:52.206103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.373 [2024-09-30 23:01:52.206585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.373 [2024-09-30 23:01:52.206615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.373 [2024-09-30 23:01:52.206624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.373 [2024-09-30 23:01:52.206790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.374 [2024-09-30 23:01:52.206950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.374 [2024-09-30 23:01:52.206957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.374 [2024-09-30 23:01:52.206962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.374 [2024-09-30 23:01:52.209366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.374 [2024-09-30 23:01:52.218707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.374 [2024-09-30 23:01:52.219300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.374 [2024-09-30 23:01:52.219330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.374 [2024-09-30 23:01:52.219339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.374 [2024-09-30 23:01:52.219503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.374 [2024-09-30 23:01:52.219654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.374 [2024-09-30 23:01:52.219660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.374 [2024-09-30 23:01:52.219665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.374 [2024-09-30 23:01:52.222075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.374 [2024-09-30 23:01:52.231413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.374 [2024-09-30 23:01:52.231975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.374 [2024-09-30 23:01:52.232006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.374 [2024-09-30 23:01:52.232014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.374 [2024-09-30 23:01:52.232182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.374 [2024-09-30 23:01:52.232333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.374 [2024-09-30 23:01:52.232339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.374 [2024-09-30 23:01:52.232344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.374 [2024-09-30 23:01:52.234755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.374 [2024-09-30 23:01:52.244091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.374 [2024-09-30 23:01:52.244605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.374 [2024-09-30 23:01:52.244620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.374 [2024-09-30 23:01:52.244629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.374 [2024-09-30 23:01:52.244778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.374 [2024-09-30 23:01:52.244933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.374 [2024-09-30 23:01:52.244939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.374 [2024-09-30 23:01:52.244945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.374 [2024-09-30 23:01:52.247346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.374 [2024-09-30 23:01:52.256672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:25.374 [2024-09-30 23:01:52.257148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.374 [2024-09-30 23:01:52.257161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:25.374 [2024-09-30 23:01:52.257166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:25.374 [2024-09-30 23:01:52.257315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:25.374 [2024-09-30 23:01:52.257464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:25.374 [2024-09-30 23:01:52.257469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:25.374 [2024-09-30 23:01:52.257474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:25.374 [2024-09-30 23:01:52.259872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:25.374 [2024-09-30 23:01:52.269334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.374 [2024-09-30 23:01:52.269779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.374 [2024-09-30 23:01:52.269790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.374 [2024-09-30 23:01:52.269795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.374 [2024-09-30 23:01:52.269950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.374 [2024-09-30 23:01:52.270099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.374 [2024-09-30 23:01:52.270105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.374 [2024-09-30 23:01:52.270110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.374 [2024-09-30 23:01:52.272512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.374 [2024-09-30 23:01:52.281985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.374 [2024-09-30 23:01:52.282524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.374 [2024-09-30 23:01:52.282554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.374 [2024-09-30 23:01:52.282563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.374 [2024-09-30 23:01:52.282727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.374 [2024-09-30 23:01:52.282879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.374 [2024-09-30 23:01:52.282888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.374 [2024-09-30 23:01:52.282900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.374 [2024-09-30 23:01:52.285305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.374 [2024-09-30 23:01:52.294640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.374 [2024-09-30 23:01:52.295200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.374 [2024-09-30 23:01:52.295230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.374 [2024-09-30 23:01:52.295238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.374 [2024-09-30 23:01:52.295403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.374 [2024-09-30 23:01:52.295554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.374 [2024-09-30 23:01:52.295560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.374 [2024-09-30 23:01:52.295566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.374 [2024-09-30 23:01:52.297977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.374 [2024-09-30 23:01:52.307303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.374 [2024-09-30 23:01:52.307865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.374 [2024-09-30 23:01:52.307900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.374 [2024-09-30 23:01:52.307909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.374 [2024-09-30 23:01:52.308073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.374 [2024-09-30 23:01:52.308225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.374 [2024-09-30 23:01:52.308231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.374 [2024-09-30 23:01:52.308236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.374 [2024-09-30 23:01:52.310651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.374 5918.40 IOPS, 23.12 MiB/s [2024-09-30 23:01:52.320005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.374 [2024-09-30 23:01:52.320576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.374 [2024-09-30 23:01:52.320606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.374 [2024-09-30 23:01:52.320614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.374 [2024-09-30 23:01:52.320779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.374 [2024-09-30 23:01:52.320939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.374 [2024-09-30 23:01:52.320946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.374 [2024-09-30 23:01:52.320951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.374 [2024-09-30 23:01:52.323356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.374 [2024-09-30 23:01:52.332683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.374 [2024-09-30 23:01:52.333229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.374 [2024-09-30 23:01:52.333259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.375 [2024-09-30 23:01:52.333268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.375 [2024-09-30 23:01:52.333432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.375 [2024-09-30 23:01:52.333584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.375 [2024-09-30 23:01:52.333590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.375 [2024-09-30 23:01:52.333595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.375 [2024-09-30 23:01:52.336007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.375 [2024-09-30 23:01:52.345339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.375 [2024-09-30 23:01:52.345870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.375 [2024-09-30 23:01:52.345905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.375 [2024-09-30 23:01:52.345914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.375 [2024-09-30 23:01:52.346079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.375 [2024-09-30 23:01:52.346230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.375 [2024-09-30 23:01:52.346236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.375 [2024-09-30 23:01:52.346241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.375 [2024-09-30 23:01:52.348647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.375 [2024-09-30 23:01:52.357978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.375 [2024-09-30 23:01:52.358563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.375 [2024-09-30 23:01:52.358592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.375 [2024-09-30 23:01:52.358601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.375 [2024-09-30 23:01:52.358766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.375 [2024-09-30 23:01:52.358925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.375 [2024-09-30 23:01:52.358932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.375 [2024-09-30 23:01:52.358937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.375 [2024-09-30 23:01:52.361342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.375 [2024-09-30 23:01:52.370681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.375 [2024-09-30 23:01:52.371249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.375 [2024-09-30 23:01:52.371279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.375 [2024-09-30 23:01:52.371290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.375 [2024-09-30 23:01:52.371455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.375 [2024-09-30 23:01:52.371606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.375 [2024-09-30 23:01:52.371613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.375 [2024-09-30 23:01:52.371618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.375 [2024-09-30 23:01:52.374031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.375 [2024-09-30 23:01:52.383357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.375 [2024-09-30 23:01:52.383921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.375 [2024-09-30 23:01:52.383951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.375 [2024-09-30 23:01:52.383960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.375 [2024-09-30 23:01:52.384126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.375 [2024-09-30 23:01:52.384277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.375 [2024-09-30 23:01:52.384284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.375 [2024-09-30 23:01:52.384289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.375 [2024-09-30 23:01:52.386701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.644 [2024-09-30 23:01:52.396041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.644 [2024-09-30 23:01:52.396521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.644 [2024-09-30 23:01:52.396550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.644 [2024-09-30 23:01:52.396558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.644 [2024-09-30 23:01:52.396725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.644 [2024-09-30 23:01:52.396877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.644 [2024-09-30 23:01:52.396883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.644 [2024-09-30 23:01:52.396888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.644 [2024-09-30 23:01:52.399302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.644 [2024-09-30 23:01:52.408629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.644 [2024-09-30 23:01:52.409197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.644 [2024-09-30 23:01:52.409227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.644 [2024-09-30 23:01:52.409236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.644 [2024-09-30 23:01:52.409400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.644 [2024-09-30 23:01:52.409552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.644 [2024-09-30 23:01:52.409561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.644 [2024-09-30 23:01:52.409566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.644 [2024-09-30 23:01:52.411988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.644 [2024-09-30 23:01:52.421323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.644 [2024-09-30 23:01:52.421786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.644 [2024-09-30 23:01:52.421816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.644 [2024-09-30 23:01:52.421824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.644 [2024-09-30 23:01:52.421996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.644 [2024-09-30 23:01:52.422148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.644 [2024-09-30 23:01:52.422154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.644 [2024-09-30 23:01:52.422159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.644 [2024-09-30 23:01:52.424562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.644 [2024-09-30 23:01:52.434027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.644 [2024-09-30 23:01:52.434594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.644 [2024-09-30 23:01:52.434624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.644 [2024-09-30 23:01:52.434632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.644 [2024-09-30 23:01:52.434797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.644 [2024-09-30 23:01:52.434956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.644 [2024-09-30 23:01:52.434963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.644 [2024-09-30 23:01:52.434968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.644 [2024-09-30 23:01:52.437372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.644 [2024-09-30 23:01:52.446704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.644 [2024-09-30 23:01:52.447258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.644 [2024-09-30 23:01:52.447287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.644 [2024-09-30 23:01:52.447296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.644 [2024-09-30 23:01:52.447461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.644 [2024-09-30 23:01:52.447612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.644 [2024-09-30 23:01:52.447618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.644 [2024-09-30 23:01:52.447624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.644 [2024-09-30 23:01:52.450035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.644 [2024-09-30 23:01:52.459359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.644 [2024-09-30 23:01:52.459917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.644 [2024-09-30 23:01:52.459947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.644 [2024-09-30 23:01:52.459956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.644 [2024-09-30 23:01:52.460123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.644 [2024-09-30 23:01:52.460275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.460281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.460286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.462698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.645 [2024-09-30 23:01:52.472030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.472535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.472549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.472555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.472704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.472853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.472858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.472863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.475267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.645 [2024-09-30 23:01:52.484726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.485297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.485327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.485336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.485500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.485651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.485657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.485662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.488076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.645 [2024-09-30 23:01:52.497438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.498007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.498037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.498045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.498216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.498368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.498374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.498380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.500790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.645 [2024-09-30 23:01:52.510120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.510687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.510716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.510725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.510889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.511055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.511062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.511067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.513470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.645 [2024-09-30 23:01:52.522805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.523386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.523416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.523424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.523589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.523740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.523746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.523752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.526164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.645 [2024-09-30 23:01:52.535491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.536031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.536062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.536071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.536237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.536389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.536395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.536404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.538816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.645 [2024-09-30 23:01:52.548156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.548632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.548646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.548652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.548801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.548955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.548961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.548966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.551367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.645 [2024-09-30 23:01:52.560829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.561275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.561288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.561293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.561442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.561590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.561595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.561600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.564001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.645 [2024-09-30 23:01:52.573464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.573951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.573964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.573969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.574118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.574267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.574273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.574277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.645 [2024-09-30 23:01:52.576677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.645 [2024-09-30 23:01:52.586137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.645 [2024-09-30 23:01:52.586675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.645 [2024-09-30 23:01:52.586711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.645 [2024-09-30 23:01:52.586719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.645 [2024-09-30 23:01:52.586884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.645 [2024-09-30 23:01:52.587042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.645 [2024-09-30 23:01:52.587049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.645 [2024-09-30 23:01:52.587054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.646 [2024-09-30 23:01:52.589458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.646 [2024-09-30 23:01:52.598789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.646 [2024-09-30 23:01:52.599337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.646 [2024-09-30 23:01:52.599367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.646 [2024-09-30 23:01:52.599376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.646 [2024-09-30 23:01:52.599541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.646 [2024-09-30 23:01:52.599692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.646 [2024-09-30 23:01:52.599698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.646 [2024-09-30 23:01:52.599703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.646 [2024-09-30 23:01:52.602115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.646 [2024-09-30 23:01:52.611446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.646 [2024-09-30 23:01:52.611994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.646 [2024-09-30 23:01:52.612024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.646 [2024-09-30 23:01:52.612033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.646 [2024-09-30 23:01:52.612197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.646 [2024-09-30 23:01:52.612349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.646 [2024-09-30 23:01:52.612355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.646 [2024-09-30 23:01:52.612360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.646 [2024-09-30 23:01:52.614771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.646 [2024-09-30 23:01:52.624104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.646 [2024-09-30 23:01:52.624656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.646 [2024-09-30 23:01:52.624685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.646 [2024-09-30 23:01:52.624694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.646 [2024-09-30 23:01:52.624859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.646 [2024-09-30 23:01:52.625021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.646 [2024-09-30 23:01:52.625028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.646 [2024-09-30 23:01:52.625034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.646 [2024-09-30 23:01:52.627437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.646 [2024-09-30 23:01:52.636764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.646 [2024-09-30 23:01:52.637313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.646 [2024-09-30 23:01:52.637344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.646 [2024-09-30 23:01:52.637353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.646 [2024-09-30 23:01:52.637517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.646 [2024-09-30 23:01:52.637669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.646 [2024-09-30 23:01:52.637674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.646 [2024-09-30 23:01:52.637680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.646 [2024-09-30 23:01:52.640089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.646 [2024-09-30 23:01:52.649428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.646 [2024-09-30 23:01:52.649997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.646 [2024-09-30 23:01:52.650027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.646 [2024-09-30 23:01:52.650036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.646 [2024-09-30 23:01:52.650201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.646 [2024-09-30 23:01:52.650352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.646 [2024-09-30 23:01:52.650358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.646 [2024-09-30 23:01:52.650363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.951 [2024-09-30 23:01:52.652773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.951 [2024-09-30 23:01:52.662108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.951 [2024-09-30 23:01:52.662649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.951 [2024-09-30 23:01:52.662680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.951 [2024-09-30 23:01:52.662689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.951 [2024-09-30 23:01:52.662853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.951 [2024-09-30 23:01:52.663012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.951 [2024-09-30 23:01:52.663019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.951 [2024-09-30 23:01:52.663025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.951 [2024-09-30 23:01:52.665433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.951 [2024-09-30 23:01:52.674772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.951 [2024-09-30 23:01:52.675354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.951 [2024-09-30 23:01:52.675384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.951 [2024-09-30 23:01:52.675393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.951 [2024-09-30 23:01:52.675557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.951 [2024-09-30 23:01:52.675708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.951 [2024-09-30 23:01:52.675715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.951 [2024-09-30 23:01:52.675720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.951 [2024-09-30 23:01:52.678134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.951 [2024-09-30 23:01:52.687465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.951 [2024-09-30 23:01:52.687934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.951 [2024-09-30 23:01:52.687949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.951 [2024-09-30 23:01:52.687955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.951 [2024-09-30 23:01:52.688104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.951 [2024-09-30 23:01:52.688253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.951 [2024-09-30 23:01:52.688259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.951 [2024-09-30 23:01:52.688264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.951 [2024-09-30 23:01:52.690669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.951 [2024-09-30 23:01:52.700299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.951 [2024-09-30 23:01:52.700868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.951 [2024-09-30 23:01:52.700903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.951 [2024-09-30 23:01:52.700912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.951 [2024-09-30 23:01:52.701077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.951 [2024-09-30 23:01:52.701228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.951 [2024-09-30 23:01:52.701234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.951 [2024-09-30 23:01:52.701239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.951 [2024-09-30 23:01:52.703645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.951 [2024-09-30 23:01:52.712983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.951 [2024-09-30 23:01:52.713521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.951 [2024-09-30 23:01:52.713551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.951 [2024-09-30 23:01:52.713562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.951 [2024-09-30 23:01:52.713727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.951 [2024-09-30 23:01:52.713878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.951 [2024-09-30 23:01:52.713884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.951 [2024-09-30 23:01:52.713890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.951 [2024-09-30 23:01:52.716301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.951 [2024-09-30 23:01:52.725634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.951 [2024-09-30 23:01:52.726230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.951 [2024-09-30 23:01:52.726260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.951 [2024-09-30 23:01:52.726269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.951 [2024-09-30 23:01:52.726433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.951 [2024-09-30 23:01:52.726584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.951 [2024-09-30 23:01:52.726591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.951 [2024-09-30 23:01:52.726596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.951 [2024-09-30 23:01:52.729007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.951 [2024-09-30 23:01:52.738334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.951 [2024-09-30 23:01:52.738920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.951 [2024-09-30 23:01:52.738950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.951 [2024-09-30 23:01:52.738958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.951 [2024-09-30 23:01:52.739123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.951 [2024-09-30 23:01:52.739274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.951 [2024-09-30 23:01:52.739280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.952 [2024-09-30 23:01:52.739286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.952 [2024-09-30 23:01:52.741702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.952 [2024-09-30 23:01:52.751035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.952 [2024-09-30 23:01:52.751617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.952 [2024-09-30 23:01:52.751646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.952 [2024-09-30 23:01:52.751655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.952 [2024-09-30 23:01:52.751820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.952 [2024-09-30 23:01:52.751981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.952 [2024-09-30 23:01:52.751992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.952 [2024-09-30 23:01:52.751997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.952 [2024-09-30 23:01:52.754403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.952 [2024-09-30 23:01:52.763732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.952 [2024-09-30 23:01:52.764311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.952 [2024-09-30 23:01:52.764342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.952 [2024-09-30 23:01:52.764351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.952 [2024-09-30 23:01:52.764516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.952 [2024-09-30 23:01:52.764668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.952 [2024-09-30 23:01:52.764673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.952 [2024-09-30 23:01:52.764679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.952 [2024-09-30 23:01:52.767088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.952 [2024-09-30 23:01:52.776423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.952 [2024-09-30 23:01:52.776911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.952 [2024-09-30 23:01:52.776926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.952 [2024-09-30 23:01:52.776932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.952 [2024-09-30 23:01:52.777081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.952 [2024-09-30 23:01:52.777230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.952 [2024-09-30 23:01:52.777235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.952 [2024-09-30 23:01:52.777240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.952 [2024-09-30 23:01:52.779643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.952 [2024-09-30 23:01:52.789107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.952 [2024-09-30 23:01:52.789671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.952 [2024-09-30 23:01:52.789700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.952 [2024-09-30 23:01:52.789709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.952 [2024-09-30 23:01:52.789873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.952 [2024-09-30 23:01:52.790033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.952 [2024-09-30 23:01:52.790040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.952 [2024-09-30 23:01:52.790045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.952 [2024-09-30 23:01:52.792454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.952 [2024-09-30 23:01:52.801781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.952 [2024-09-30 23:01:52.802264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.952 [2024-09-30 23:01:52.802294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.952 [2024-09-30 23:01:52.802302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.952 [2024-09-30 23:01:52.802467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.952 [2024-09-30 23:01:52.802618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.952 [2024-09-30 23:01:52.802624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.952 [2024-09-30 23:01:52.802629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.952 [2024-09-30 23:01:52.805041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.952 [2024-09-30 23:01:52.814449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.952 [2024-09-30 23:01:52.814996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.952 [2024-09-30 23:01:52.815026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.952 [2024-09-30 23:01:52.815035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.952 [2024-09-30 23:01:52.815202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.952 [2024-09-30 23:01:52.815353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.952 [2024-09-30 23:01:52.815359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.952 [2024-09-30 23:01:52.815365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.952 [2024-09-30 23:01:52.817777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.952 [2024-09-30 23:01:52.827112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.952 [2024-09-30 23:01:52.827690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.952 [2024-09-30 23:01:52.827720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.952 [2024-09-30 23:01:52.827728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.952 [2024-09-30 23:01:52.827892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.952 [2024-09-30 23:01:52.828052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.952 [2024-09-30 23:01:52.828058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.952 [2024-09-30 23:01:52.828063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.952 [2024-09-30 23:01:52.830466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.952 [2024-09-30 23:01:52.839792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.953 [2024-09-30 23:01:52.840273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.953 [2024-09-30 23:01:52.840288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.953 [2024-09-30 23:01:52.840293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.953 [2024-09-30 23:01:52.840446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.953 [2024-09-30 23:01:52.840595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.953 [2024-09-30 23:01:52.840600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.953 [2024-09-30 23:01:52.840606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.953 [2024-09-30 23:01:52.843017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.953 [2024-09-30 23:01:52.852478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.953 [2024-09-30 23:01:52.853094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.953 [2024-09-30 23:01:52.853125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.953 [2024-09-30 23:01:52.853133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.953 [2024-09-30 23:01:52.853298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.953 [2024-09-30 23:01:52.853449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.953 [2024-09-30 23:01:52.853455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.953 [2024-09-30 23:01:52.853460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.953 [2024-09-30 23:01:52.855870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.953 [2024-09-30 23:01:52.865058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.953 [2024-09-30 23:01:52.865677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.953 [2024-09-30 23:01:52.865707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.953 [2024-09-30 23:01:52.865716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.953 [2024-09-30 23:01:52.865880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.953 [2024-09-30 23:01:52.866039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.953 [2024-09-30 23:01:52.866046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.953 [2024-09-30 23:01:52.866052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.953 [2024-09-30 23:01:52.868455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.953 [2024-09-30 23:01:52.877644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.953 [2024-09-30 23:01:52.878210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.953 [2024-09-30 23:01:52.878240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.953 [2024-09-30 23:01:52.878249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.953 [2024-09-30 23:01:52.878414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.953 [2024-09-30 23:01:52.878565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.953 [2024-09-30 23:01:52.878571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.953 [2024-09-30 23:01:52.878579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.953 [2024-09-30 23:01:52.880992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.953 [2024-09-30 23:01:52.890315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.953 [2024-09-30 23:01:52.890878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.953 [2024-09-30 23:01:52.890917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.953 [2024-09-30 23:01:52.890928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.953 [2024-09-30 23:01:52.891093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.953 [2024-09-30 23:01:52.891245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.953 [2024-09-30 23:01:52.891251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.953 [2024-09-30 23:01:52.891256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.953 [2024-09-30 23:01:52.893663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.953 [2024-09-30 23:01:52.902989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.953 [2024-09-30 23:01:52.903565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.953 [2024-09-30 23:01:52.903595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.953 [2024-09-30 23:01:52.903604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.953 [2024-09-30 23:01:52.903768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.953 [2024-09-30 23:01:52.903927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.953 [2024-09-30 23:01:52.903934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.953 [2024-09-30 23:01:52.903939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.953 [2024-09-30 23:01:52.906344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.953 [2024-09-30 23:01:52.915676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.953 [2024-09-30 23:01:52.916245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.953 [2024-09-30 23:01:52.916275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.953 [2024-09-30 23:01:52.916284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.953 [2024-09-30 23:01:52.916449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.953 [2024-09-30 23:01:52.916600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.953 [2024-09-30 23:01:52.916606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.953 [2024-09-30 23:01:52.916611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.953 [2024-09-30 23:01:52.919023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.953 [2024-09-30 23:01:52.928353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.953 [2024-09-30 23:01:52.928948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.953 [2024-09-30 23:01:52.928981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.953 [2024-09-30 23:01:52.928989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.954 [2024-09-30 23:01:52.929154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.954 [2024-09-30 23:01:52.929305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.954 [2024-09-30 23:01:52.929311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.954 [2024-09-30 23:01:52.929317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.954 [2024-09-30 23:01:52.931729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:25.954 [2024-09-30 23:01:52.940927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.954 [2024-09-30 23:01:52.941496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.954 [2024-09-30 23:01:52.941526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.954 [2024-09-30 23:01:52.941535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.954 [2024-09-30 23:01:52.941702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.954 [2024-09-30 23:01:52.941853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.954 [2024-09-30 23:01:52.941859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.954 [2024-09-30 23:01:52.941864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.954 [2024-09-30 23:01:52.944278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:25.954 [2024-09-30 23:01:52.953612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.954 [2024-09-30 23:01:52.954273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.954 [2024-09-30 23:01:52.954303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:25.954 [2024-09-30 23:01:52.954312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:25.954 [2024-09-30 23:01:52.954477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:25.954 [2024-09-30 23:01:52.954628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:25.954 [2024-09-30 23:01:52.954634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:25.954 [2024-09-30 23:01:52.954639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.237 [2024-09-30 23:01:52.957052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 891318 Killed "${NVMF_APP[@]}" "$@" 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:26.237 [2024-09-30 23:01:52.966242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.237 [2024-09-30 23:01:52.966693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.237 [2024-09-30 23:01:52.966707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.237 [2024-09-30 23:01:52.966712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.237 [2024-09-30 23:01:52.966861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.237 [2024-09-30 23:01:52.967015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.237 [2024-09-30 23:01:52.967022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.237 [2024-09-30 23:01:52.967026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.237 [2024-09-30 23:01:52.969426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=893005 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 893005 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 893005 ']' 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:26.237 23:01:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:26.237 [2024-09-30 23:01:52.978922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.237 [2024-09-30 23:01:52.979484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.237 [2024-09-30 23:01:52.979514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.237 [2024-09-30 23:01:52.979522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.237 [2024-09-30 23:01:52.979687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.237 [2024-09-30 23:01:52.979838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.237 [2024-09-30 23:01:52.979844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.237 [2024-09-30 23:01:52.979850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.237 [2024-09-30 23:01:52.982263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.237 [2024-09-30 23:01:52.991606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.237 [2024-09-30 23:01:52.992261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.237 [2024-09-30 23:01:52.992291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.237 [2024-09-30 23:01:52.992300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.237 [2024-09-30 23:01:52.992465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.237 [2024-09-30 23:01:52.992620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.237 [2024-09-30 23:01:52.992626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.237 [2024-09-30 23:01:52.992631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.237 [2024-09-30 23:01:52.995040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.237 [2024-09-30 23:01:53.004229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.237 [2024-09-30 23:01:53.004573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.237 [2024-09-30 23:01:53.004587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.237 [2024-09-30 23:01:53.004593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.237 [2024-09-30 23:01:53.004742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.237 [2024-09-30 23:01:53.004891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.237 [2024-09-30 23:01:53.004902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.237 [2024-09-30 23:01:53.004907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.237 [2024-09-30 23:01:53.007308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.237 [2024-09-30 23:01:53.016928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.237 [2024-09-30 23:01:53.017287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.237 [2024-09-30 23:01:53.017299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.237 [2024-09-30 23:01:53.017304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.237 [2024-09-30 23:01:53.017454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.237 [2024-09-30 23:01:53.017602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.237 [2024-09-30 23:01:53.017608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.237 [2024-09-30 23:01:53.017613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.237 [2024-09-30 23:01:53.020016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.237 [2024-09-30 23:01:53.024932] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:33:26.237 [2024-09-30 23:01:53.024968] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.237 [2024-09-30 23:01:53.029634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.237 [2024-09-30 23:01:53.029980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.237 [2024-09-30 23:01:53.029992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.237 [2024-09-30 23:01:53.029998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.237 [2024-09-30 23:01:53.030147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.237 [2024-09-30 23:01:53.030299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.237 [2024-09-30 23:01:53.030305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.237 [2024-09-30 23:01:53.030310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.237 [2024-09-30 23:01:53.032709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.237 [2024-09-30 23:01:53.042332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.237 [2024-09-30 23:01:53.042760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.237 [2024-09-30 23:01:53.042772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.237 [2024-09-30 23:01:53.042778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.237 [2024-09-30 23:01:53.042931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.237 [2024-09-30 23:01:53.043080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.237 [2024-09-30 23:01:53.043086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.237 [2024-09-30 23:01:53.043092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.237 [2024-09-30 23:01:53.045489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.237 [2024-09-30 23:01:53.054963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.237 [2024-09-30 23:01:53.055521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.237 [2024-09-30 23:01:53.055553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.237 [2024-09-30 23:01:53.055562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.237 [2024-09-30 23:01:53.055729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.237 [2024-09-30 23:01:53.055881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.055887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.055899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.058303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.238 [2024-09-30 23:01:53.067589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.068219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.068249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.068258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.068424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.068576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.068582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.068587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.071004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.238 [2024-09-30 23:01:53.080212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.080800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.080830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.080839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.081013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.081165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.081172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.081177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.083583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.238 [2024-09-30 23:01:53.092934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.093425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.093440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.093446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.093596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.093746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.093752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.093757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.096161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.238 [2024-09-30 23:01:53.099517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:26.238 [2024-09-30 23:01:53.105637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.106121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.106134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.106140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.106290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.106439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.106445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.106450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.108850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.238 [2024-09-30 23:01:53.118341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.118921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.118952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.118966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.119135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.119287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.119293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.119299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.121707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.238 [2024-09-30 23:01:53.130923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.131417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.131433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.131439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.131590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.131740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.131746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.131751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.134158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.238 [2024-09-30 23:01:53.143661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.144163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.144194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.144203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.144373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.144525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.144531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.144537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.146946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.238 [2024-09-30 23:01:53.153473] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.238 [2024-09-30 23:01:53.153497] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.238 [2024-09-30 23:01:53.153503] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.238 [2024-09-30 23:01:53.153509] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.238 [2024-09-30 23:01:53.153513] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.238 [2024-09-30 23:01:53.153688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.238 [2024-09-30 23:01:53.153843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.238 [2024-09-30 23:01:53.153845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:26.238 [2024-09-30 23:01:53.156298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.156800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.156815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.156820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.156975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.157124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.157130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.157135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:26.238 [2024-09-30 23:01:53.159535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.238 [2024-09-30 23:01:53.168875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.238 [2024-09-30 23:01:53.169237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.238 [2024-09-30 23:01:53.169250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.238 [2024-09-30 23:01:53.169256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.238 [2024-09-30 23:01:53.169405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.238 [2024-09-30 23:01:53.169555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.238 [2024-09-30 23:01:53.169561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.238 [2024-09-30 23:01:53.169566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.238 [2024-09-30 23:01:53.171977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.238 [2024-09-30 23:01:53.181462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.239 [2024-09-30 23:01:53.181912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.239 [2024-09-30 23:01:53.181926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.239 [2024-09-30 23:01:53.181932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.239 [2024-09-30 23:01:53.182081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.239 [2024-09-30 23:01:53.182230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.239 [2024-09-30 23:01:53.182236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.239 [2024-09-30 23:01:53.182241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.239 [2024-09-30 23:01:53.184642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.239 [2024-09-30 23:01:53.194133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.239 [2024-09-30 23:01:53.194614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.239 [2024-09-30 23:01:53.194627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.239 [2024-09-30 23:01:53.194639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.239 [2024-09-30 23:01:53.194788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.239 [2024-09-30 23:01:53.194942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.239 [2024-09-30 23:01:53.194949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.239 [2024-09-30 23:01:53.194954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.239 [2024-09-30 23:01:53.197352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.239 [2024-09-30 23:01:53.206826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.239 [2024-09-30 23:01:53.207396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.239 [2024-09-30 23:01:53.207429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.239 [2024-09-30 23:01:53.207439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.239 [2024-09-30 23:01:53.207611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.239 [2024-09-30 23:01:53.207762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.239 [2024-09-30 23:01:53.207769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.239 [2024-09-30 23:01:53.207774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.239 [2024-09-30 23:01:53.210187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.239 [2024-09-30 23:01:53.219536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.239 [2024-09-30 23:01:53.220021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.239 [2024-09-30 23:01:53.220051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.239 [2024-09-30 23:01:53.220060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.239 [2024-09-30 23:01:53.220229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.239 [2024-09-30 23:01:53.220380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.239 [2024-09-30 23:01:53.220386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.239 [2024-09-30 23:01:53.220392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.239 [2024-09-30 23:01:53.222802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.239 [2024-09-30 23:01:53.232162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.239 [2024-09-30 23:01:53.232510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.239 [2024-09-30 23:01:53.232526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.239 [2024-09-30 23:01:53.232532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.239 [2024-09-30 23:01:53.232684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.239 [2024-09-30 23:01:53.232833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.239 [2024-09-30 23:01:53.232845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.239 [2024-09-30 23:01:53.232850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.239 [2024-09-30 23:01:53.235259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.239 [2024-09-30 23:01:53.244741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.239 [2024-09-30 23:01:53.245339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.239 [2024-09-30 23:01:53.245369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.239 [2024-09-30 23:01:53.245378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.239 [2024-09-30 23:01:53.245543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.239 [2024-09-30 23:01:53.245695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.239 [2024-09-30 23:01:53.245702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.239 [2024-09-30 23:01:53.245707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.239 [2024-09-30 23:01:53.248115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.502 [2024-09-30 23:01:53.257456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.502 [2024-09-30 23:01:53.257981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.502 [2024-09-30 23:01:53.258011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.502 [2024-09-30 23:01:53.258020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.502 [2024-09-30 23:01:53.258187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.502 [2024-09-30 23:01:53.258339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.502 [2024-09-30 23:01:53.258345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.502 [2024-09-30 23:01:53.258351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.502 [2024-09-30 23:01:53.260760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.502 [2024-09-30 23:01:53.270101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.502 [2024-09-30 23:01:53.270612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.502 [2024-09-30 23:01:53.270627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.502 [2024-09-30 23:01:53.270633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.502 [2024-09-30 23:01:53.270782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.502 [2024-09-30 23:01:53.270937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.502 [2024-09-30 23:01:53.270945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.502 [2024-09-30 23:01:53.270951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.502 [2024-09-30 23:01:53.273356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.502 [2024-09-30 23:01:53.282693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.502 [2024-09-30 23:01:53.283272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.502 [2024-09-30 23:01:53.283302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:26.502 [2024-09-30 23:01:53.283311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:26.502 [2024-09-30 23:01:53.283477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:26.502 [2024-09-30 23:01:53.283628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.502 [2024-09-30 23:01:53.283635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.502 [2024-09-30 23:01:53.283640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.502 [2024-09-30 23:01:53.286054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.502 [2024-09-30 23:01:53.295396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:26.502 [2024-09-30 23:01:53.295927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.502 [2024-09-30 23:01:53.295942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420
00:33:26.502 [2024-09-30 23:01:53.295948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set
00:33:26.502 [2024-09-30 23:01:53.296098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor
00:33:26.502 [2024-09-30 23:01:53.296247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:26.502 [2024-09-30 23:01:53.296253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:26.502 [2024-09-30 23:01:53.296258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:26.502 [2024-09-30 23:01:53.298660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same nine-record reset/reconnect failure sequence repeats for the retry at 23:01:53.307996 ...]
00:33:26.502 4932.00 IOPS, 19.27 MiB/s
[... the sequence then repeats for every subsequent retry from 23:01:53.321541 through 23:01:53.818485, differing only in timestamps; the elapsed-time prefix advances from 00:33:26.502 through 00:33:26.766 to 00:33:27.030 over the run ...]
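For readers tracing the loop above: errno = 111 is ECONNREFUSED on Linux, so every reconnect attempt in the bdev_nvme reset path dies at connect() because nothing is yet listening on 10.0.0.2:4420; the target's TCP transport and subsystem are only brought up in the records that follow. A minimal bash probe in the same spirit is sketched below; the address, port, retry budget and interval are taken from or inspired by this log and are otherwise arbitrary, not part of the harness.

#!/usr/bin/env bash
# Hedged sketch: poll an NVMe/TCP target port until it accepts connections,
# mirroring what the failing reconnect loop above is waiting for.
addr=10.0.0.2   # target address from the log (assumption: reachable test NIC)
port=4420       # standard NVMe/TCP service port, also seen in the log
for attempt in $(seq 1 50); do
    # bash's /dev/tcp pseudo-device issues a real connect(); while the
    # listener is absent it fails just like the errno 111 records above.
    if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
        echo "attempt ${attempt}: ${addr}:${port} is accepting connections"
        exit 0
    fi
    sleep 0.1
done
echo "gave up after 50 attempts: connect() still refused (ECONNREFUSED)" >&2
exit 1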
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... the reset/reconnect failure sequence repeats for the retries at 23:01:53.827683 and 23:01:53.840322 ...]
[... the reset/reconnect failure sequence repeats for the retry at 23:01:53.852899 ...]
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:27.030 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... the failure sequence repeats for the retry at 23:01:53.865477 ...]
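The rpc_cmd line at host/bdevperf.sh@17 above is the harness wrapper around SPDK's scripts/rpc.py. Run by hand, the same step would look roughly like the sketch below; the rpc.py path is an assumption, and reading -o as the C2H-success toggle is my interpretation of the flag rather than something this log states.

# Hedged, hand-run equivalent of the transport-creation RPC above
# (assumes an nvmf_tgt is already running and scripts/rpc.py is on the path):
#   -t tcp   select the TCP transport
#   -u 8192  I/O unit size in bytes
#   -o       C2H-success toggle (my reading of the flag, not confirmed here)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192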
00:33:27.031 [2024-09-30 23:01:53.870001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.031 [2024-09-30 23:01:53.878116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.031 [2024-09-30 23:01:53.878662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.031 [2024-09-30 23:01:53.878692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:27.031 [2024-09-30 23:01:53.878701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:27.031 [2024-09-30 23:01:53.878866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:27.031 [2024-09-30 23:01:53.879024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.031 [2024-09-30 23:01:53.879031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.031 [2024-09-30 23:01:53.879036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.031 [2024-09-30 23:01:53.881438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.031 [2024-09-30 23:01:53.890768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.031 [2024-09-30 23:01:53.891096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.031 [2024-09-30 23:01:53.891111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:27.031 [2024-09-30 23:01:53.891117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:27.031 [2024-09-30 23:01:53.891266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:27.031 [2024-09-30 23:01:53.891415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.031 [2024-09-30 23:01:53.891421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.031 [2024-09-30 23:01:53.891425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.031 [2024-09-30 23:01:53.893829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.031 [2024-09-30 23:01:53.903441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.031 [2024-09-30 23:01:53.903913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.031 [2024-09-30 23:01:53.903926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:27.031 [2024-09-30 23:01:53.903932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:27.031 [2024-09-30 23:01:53.904085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:27.031 [2024-09-30 23:01:53.904234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.031 [2024-09-30 23:01:53.904240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.031 [2024-09-30 23:01:53.904245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.031 Malloc0 00:33:27.031 [2024-09-30 23:01:53.906643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.031 [2024-09-30 23:01:53.916124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.031 [2024-09-30 23:01:53.916672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.031 [2024-09-30 23:01:53.916703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:27.031 [2024-09-30 23:01:53.916712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:27.031 [2024-09-30 23:01:53.916877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:27.031 [2024-09-30 23:01:53.917035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.031 [2024-09-30 23:01:53.917042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.031 [2024-09-30 23:01:53.917047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.031 [2024-09-30 23:01:53.919451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.031 [2024-09-30 23:01:53.928783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.031 [2024-09-30 23:01:53.929256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.031 [2024-09-30 23:01:53.929271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9212a0 with addr=10.0.0.2, port=4420 00:33:27.031 [2024-09-30 23:01:53.929277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9212a0 is same with the state(6) to be set 00:33:27.031 [2024-09-30 23:01:53.929426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9212a0 (9): Bad file descriptor 00:33:27.031 [2024-09-30 23:01:53.929574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.031 [2024-09-30 23:01:53.929580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.031 [2024-09-30 23:01:53.929585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.031 [2024-09-30 23:01:53.931984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.031 [2024-09-30 23:01:53.938380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.031 [2024-09-30 23:01:53.941447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.031 23:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 891988 00:33:27.031 [2024-09-30 23:01:53.971620] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:35.653 4874.00 IOPS, 19.04 MiB/s
5889.88 IOPS, 23.01 MiB/s
6696.78 IOPS, 26.16 MiB/s
7331.80 IOPS, 28.64 MiB/s
7838.73 IOPS, 30.62 MiB/s
8278.67 IOPS, 32.34 MiB/s
8655.62 IOPS, 33.81 MiB/s
8972.79 IOPS, 35.05 MiB/s
9247.00 IOPS, 36.12 MiB/s
00:33:35.653 Latency(us)
00:33:35.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:35.653 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:35.653 Verification LBA range: start 0x0 length 0x4000
00:33:35.653 Nvme1n1 : 15.01 9250.95 36.14 13545.48 0.00 5597.12 546.13 17148.59
00:33:35.653 ===================================================================================================================
00:33:35.653 Total : 9250.95 36.14 13545.48 0.00 5597.12 546.13 17148.59
00:33:35.653 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 893005 ']'
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 893005
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 893005 ']'
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 893005
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 893005
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 893005' 00:33:35.654 killing process with pid 893005 00:33:35.654 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 893005 00:33:35.654 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 893005 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.914 23:02:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.825 23:02:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.825 00:33:37.825 real 0m28.533s 00:33:37.825 user 1m3.493s 00:33:37.825 sys 0m7.858s 00:33:37.825 23:02:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:37.825 23:02:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:37.825 ************************************ 00:33:37.825 END TEST nvmf_bdevperf 00:33:37.825 ************************************ 00:33:38.086 23:02:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:38.086 23:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:38.086 23:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:38.086 23:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.086 ************************************ 00:33:38.086 START TEST nvmf_target_disconnect 00:33:38.086 ************************************ 00:33:38.086 23:02:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:38.086 * Looking for test storage... 
00:33:38.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:38.086 23:02:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.086 --rc genhtml_branch_coverage=1 00:33:38.086 --rc genhtml_function_coverage=1 00:33:38.086 --rc genhtml_legend=1 00:33:38.086 --rc geninfo_all_blocks=1 00:33:38.086 --rc geninfo_unexecuted_blocks=1 00:33:38.086 00:33:38.086 ' 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.086 --rc genhtml_branch_coverage=1 00:33:38.086 --rc genhtml_function_coverage=1 00:33:38.086 --rc genhtml_legend=1 00:33:38.086 --rc geninfo_all_blocks=1 00:33:38.086 --rc geninfo_unexecuted_blocks=1 00:33:38.086 00:33:38.086 ' 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.086 --rc genhtml_branch_coverage=1 00:33:38.086 --rc genhtml_function_coverage=1 00:33:38.086 --rc genhtml_legend=1 00:33:38.086 --rc geninfo_all_blocks=1 00:33:38.086 --rc geninfo_unexecuted_blocks=1 00:33:38.086 00:33:38.086 ' 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.086 --rc genhtml_branch_coverage=1 00:33:38.086 --rc genhtml_function_coverage=1 00:33:38.086 --rc genhtml_legend=1 00:33:38.086 --rc geninfo_all_blocks=1 00:33:38.086 --rc geninfo_unexecuted_blocks=1 00:33:38.086 00:33:38.086 ' 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.086 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.347 23:02:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:46.492 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:46.492 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:46.492 Found net devices under 0000:31:00.0: cvl_0_0 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:46.492 Found net devices under 0000:31:00.1: cvl_0_1 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:33:46.492 00:33:46.492 --- 10.0.0.2 ping statistics --- 00:33:46.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.492 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:46.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:33:46.492 00:33:46.492 --- 10.0.0.1 ping statistics --- 00:33:46.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.492 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.492 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:46.493 ************************************ 00:33:46.493 START TEST nvmf_target_disconnect_tc1 00:33:46.493 ************************************ 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:46.493 23:02:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:46.493 23:02:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:46.493 [2024-09-30 23:02:13.001456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.493 [2024-09-30 23:02:13.001524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9cdc0 with addr=10.0.0.2, port=4420 00:33:46.493 [2024-09-30 23:02:13.001562] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:46.493 [2024-09-30 23:02:13.001574] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:46.493 [2024-09-30 23:02:13.001582] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:33:46.493 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:46.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:46.493 Initializing NVMe Controllers 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:46.493 00:33:46.493 real 0m0.135s 00:33:46.493 user 0m0.055s 00:33:46.493 sys 0m0.079s 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:46.493 ************************************ 00:33:46.493 END TEST nvmf_target_disconnect_tc1 00:33:46.493 ************************************ 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:46.493 ************************************ 00:33:46.493 START TEST nvmf_target_disconnect_tc2 00:33:46.493 ************************************ 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=899285 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 899285 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 899285 ']' 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:46.493 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:46.493 [2024-09-30 23:02:13.165911] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:33:46.493 [2024-09-30 23:02:13.165970] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.493 [2024-09-30 23:02:13.257439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:46.493 [2024-09-30 23:02:13.353776] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.493 [2024-09-30 23:02:13.353836] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:46.493 [2024-09-30 23:02:13.353845] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.493 [2024-09-30 23:02:13.353853] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.493 [2024-09-30 23:02:13.353859] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.493 [2024-09-30 23:02:13.354016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:33:46.493 [2024-09-30 23:02:13.354302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:33:46.493 [2024-09-30 23:02:13.354466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:33:46.493 [2024-09-30 23:02:13.354468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:33:47.065 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:47.065 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:47.065 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:47.065 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.065 23:02:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.065 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.065 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:47.065 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.065 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.065 Malloc0 00:33:47.065 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.065 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:47.065 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.065 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.065 [2024-09-30 23:02:14.075939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.325 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.326 23:02:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.326 [2024-09-30 23:02:14.116366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=899485 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:47.326 23:02:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:49.245 23:02:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 899285 00:33:49.245 23:02:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error 
(sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Write completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 [2024-09-30 23:02:16.154494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.245 Read completed with error (sct=0, sc=8) 00:33:49.245 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 
00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Read completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 Write completed with error (sct=0, sc=8) 00:33:49.246 starting I/O failed 00:33:49.246 [2024-09-30 23:02:16.154873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:49.246 [2024-09-30 23:02:16.155401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-09-30 23:02:16.155466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 00:33:49.246 [2024-09-30 23:02:16.155668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-09-30 23:02:16.155682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 00:33:49.246 [2024-09-30 23:02:16.156018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-09-30 23:02:16.156032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 00:33:49.246 [2024-09-30 23:02:16.156326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-09-30 23:02:16.156337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 00:33:49.246 [2024-09-30 23:02:16.156664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-09-30 23:02:16.156676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 
00:33:49.246 [log condensed: the three-record failure sequence above (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for 170 further connection attempts, timestamps 2024-09-30 23:02:16.157177 through 23:02:16.214420]
00:33:49.248 [2024-09-30 23:02:16.214860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.214888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.215259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.215288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.215642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.215670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.216036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.216067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.216419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.216448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.216702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.216731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.217087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.217118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.217454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.217482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.217833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.217863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.218254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.218284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 
00:33:49.248 [2024-09-30 23:02:16.218630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.218660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.219014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.219044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.219413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.219441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.219791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.219819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.220076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.220110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.220486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.220514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.220882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.220920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.221307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.221336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.221569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.221597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.221821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.221853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 
00:33:49.248 [2024-09-30 23:02:16.222221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.222251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.222606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.222636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.222852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.222883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.223260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.223290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.223647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.223676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.224053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.248 [2024-09-30 23:02:16.224083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.248 qpair failed and we were unable to recover it. 00:33:49.248 [2024-09-30 23:02:16.224437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.224466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.224829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.224858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.225254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.225284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.225652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.225681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 
00:33:49.249 [2024-09-30 23:02:16.226039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.226070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.226414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.226443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.226799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.226827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.227167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.227197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.227450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.227485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.227734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.227762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.228031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.228062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.228434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.228461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.228830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.228860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.229209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.229239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 
00:33:49.249 [2024-09-30 23:02:16.229601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.229630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.230018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.230048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.230421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.230449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.230833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.230861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.231202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.231232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.231597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.231625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.231877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.231918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.232146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.232177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.232546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.232575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.232937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.232968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 
00:33:49.249 [2024-09-30 23:02:16.233253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.233281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.233632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.233661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.234026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.234057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.234413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.234441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.234847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.234876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.235149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.235179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.235418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.235448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.235831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.235861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.236207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.236236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.236589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.236619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 
00:33:49.249 [2024-09-30 23:02:16.236984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.237014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.237407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.237436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.237802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.237831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.238095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.238124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.238478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.238507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.238857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.238886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.239170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.239198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.239441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.239472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.239830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.239859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.240118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.240150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 
00:33:49.249 [2024-09-30 23:02:16.240528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.240557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.240939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.240969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.241353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.241384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.241746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.241774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.241999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.242036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.242402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.242431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.242667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.242698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.243066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.243096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.243457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.243486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.243863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.243891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 
00:33:49.249 [2024-09-30 23:02:16.244192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.244220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.244562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.244590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.245011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.245041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.245414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.245451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.245784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.245813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.246158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.246197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.246543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.246572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.246921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.246951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.247304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.247333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.247638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.247667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 
00:33:49.249 [2024-09-30 23:02:16.248044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.248074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.248324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.248354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.248593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.248624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.249012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.249041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.249281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.249310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.249673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.249702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.249942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.249995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.250374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.250402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.250757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.250786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 00:33:49.249 [2024-09-30 23:02:16.251159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.249 [2024-09-30 23:02:16.251191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.249 qpair failed and we were unable to recover it. 
00:33:49.250 [2024-09-30 23:02:16.251560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.251588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.251961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.251992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.252353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.252383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.252751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.252780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.253183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.253213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.253569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.253597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.253855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.253883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.254184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.254212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.254572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.254601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.254975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.255005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 
00:33:49.250 [2024-09-30 23:02:16.255356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.255384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.255745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.255773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.256149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.256179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.256513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.256542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.250 [2024-09-30 23:02:16.256883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.250 [2024-09-30 23:02:16.256931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.250 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.257278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.257310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.257697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.257726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.258085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.258118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.258480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.258509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.258729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.258758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 
00:33:49.522 [2024-09-30 23:02:16.259131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.259161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.259475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.259503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.259871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.259909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.260184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.260212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.260563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.260591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.260955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.260985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.261253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.261282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.261612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.261641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.261921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.261952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 00:33:49.522 [2024-09-30 23:02:16.262208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.522 [2024-09-30 23:02:16.262236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.522 qpair failed and we were unable to recover it. 
00:33:49.522 [2024-09-30 23:02:16.262642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.522 [2024-09-30 23:02:16.262670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:49.522 qpair failed and we were unable to recover it.
[... the same three-line error block repeats roughly 200 more times between 23:02:16.262 and 23:02:16.340 with only the timestamps advancing; every attempt is the same connect() failure with errno = 111 on tqpair=0x7f36dc000b90, addr=10.0.0.2, port=4420 ...]
00:33:49.528 [2024-09-30 23:02:16.340814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.528 [2024-09-30 23:02:16.340843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:49.528 qpair failed and we were unable to recover it.
00:33:49.528 [2024-09-30 23:02:16.341220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.341251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.341627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.341657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.342035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.342065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.342430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.342460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.342666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.342696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.342954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.342984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.343336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.343365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.343734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.343762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.344108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.344139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.344495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.344524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 
00:33:49.528 [2024-09-30 23:02:16.344776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.344804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.345161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.345190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.345561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.345588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.345938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.345967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.346231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.346265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.346621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.346648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.347021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.347049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.347415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.347444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.347810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.347837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 00:33:49.528 [2024-09-30 23:02:16.348303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.528 [2024-09-30 23:02:16.348332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.528 qpair failed and we were unable to recover it. 
00:33:49.528 [2024-09-30 23:02:16.348687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.348716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.349079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.349109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.349472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.349501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.349760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.349793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.350151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.350183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.350415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.350445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.350807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.350839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.351089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.351121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.351504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.351535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.351929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.351962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 
00:33:49.529 [2024-09-30 23:02:16.352315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.352346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.352716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.352746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.353121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.353152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.353508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.353540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.353906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.353938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.354322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.354352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.354724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.354754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.355121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.355151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.355510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.355540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.355910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.355942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 
00:33:49.529 [2024-09-30 23:02:16.356323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.356354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.356719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.356749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.357102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.357133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.357490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.357520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.357772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.357801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.358167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.358204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.358435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.358466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.358824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.358854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.359211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.359250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.359606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.359635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 
00:33:49.529 [2024-09-30 23:02:16.360038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.360068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.360326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.360355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.360705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.360743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.361076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.361106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.361478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.361513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.361880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.361920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.362293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.362322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.362696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.529 [2024-09-30 23:02:16.362725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.529 qpair failed and we were unable to recover it. 00:33:49.529 [2024-09-30 23:02:16.363075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.363104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.363473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.363502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 
00:33:49.530 [2024-09-30 23:02:16.363877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.363919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.364274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.364302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.364674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.364703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.365121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.365152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.365398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.365430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.365800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.365832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.366202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.366232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.366491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.366522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.366869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.366912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.367298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.367328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 
00:33:49.530 [2024-09-30 23:02:16.367704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.367733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.368122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.368153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.368566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.368596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.368958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.368991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.369378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.369408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.369781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.369810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.370170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.370200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.370571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.370599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.370968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.370999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.371362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.371392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 
00:33:49.530 [2024-09-30 23:02:16.371743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.371772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.372002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.372035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.372388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.372420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.372670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.372703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.372940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.372972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.373337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.373366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.373730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.373760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.374131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.374163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.374519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.374549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.374932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.374964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 
00:33:49.530 [2024-09-30 23:02:16.375317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.375347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.375717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.375748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.376091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.376123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.376487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.376518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.376878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.376930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.377283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.377313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.530 [2024-09-30 23:02:16.377684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.530 [2024-09-30 23:02:16.377713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.530 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.378076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.378107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.378504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.378533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.378917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.378949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 
00:33:49.531 [2024-09-30 23:02:16.379330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.379361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.379714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.379744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.380147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.380178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.380609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.380638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.381058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.381090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.381458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.381487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.381831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.381876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.382107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.382139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.382528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.382558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.382935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.382965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 
00:33:49.531 [2024-09-30 23:02:16.383329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.383363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.383722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.383751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.384003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.384036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.384398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.384427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.384784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.384816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.385216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.385247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.385612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.385641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.386017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.386048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.386417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.386446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.386710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.386740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 
00:33:49.531 [2024-09-30 23:02:16.387104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.387135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.387523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.387553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.387936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.387967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.388339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.388368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.388733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.388763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.389063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.389093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.389472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.389501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.389862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.389890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.390266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.390297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 00:33:49.531 [2024-09-30 23:02:16.390660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.531 [2024-09-30 23:02:16.390689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.531 qpair failed and we were unable to recover it. 
00:33:49.531 [2024-09-30 23:02:16.391055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.531 [2024-09-30 23:02:16.391087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:49.532 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts between 23:02:16.391 and 23:02:16.469; duplicate entries elided ...]
00:33:49.537 [2024-09-30 23:02:16.469759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.537 [2024-09-30 23:02:16.469787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:49.537 qpair failed and we were unable to recover it.
00:33:49.537 [2024-09-30 23:02:16.470192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.470221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.470567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.470596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.470966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.470996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.471249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.471277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.471652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.471681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.471880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.471923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.472289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.472318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.472676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.472705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.473053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.473083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.473442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.473471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 
00:33:49.537 [2024-09-30 23:02:16.473718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.473746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.474041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.474070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.474442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.474471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.474838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.474866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.475175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.475206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.475595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-09-30 23:02:16.475624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.537 qpair failed and we were unable to recover it. 00:33:49.537 [2024-09-30 23:02:16.475987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.476016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.476423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.476452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.476823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.476853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.477244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.477275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 
00:33:49.538 [2024-09-30 23:02:16.477627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.477657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.478019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.478055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.478325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.478353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.478702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.478731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.479099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.479128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.479491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.479520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.479907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.479937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.480353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.480381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.480746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.480775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.481141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.481171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 
00:33:49.538 [2024-09-30 23:02:16.481539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.481569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.481927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.481957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.482326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.482355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.482604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.482635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.483005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.483036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.483402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.483431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.483799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.483827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.484222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.484252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.484513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.484541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.484890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.484931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 
00:33:49.538 [2024-09-30 23:02:16.485161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.485192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.485559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.485588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.485966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.485997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.486241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.486275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.486621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.486652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.487017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.487047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.487409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.487437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.487678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.487706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.487991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.488021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.488375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.488403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 
00:33:49.538 [2024-09-30 23:02:16.488766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.488795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.489177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.489206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.489557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.489586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.489949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.489980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.538 [2024-09-30 23:02:16.490345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.538 [2024-09-30 23:02:16.490373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.538 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.490747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.490776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.491152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.491182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.491534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.491562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.491797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.491825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.492204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.492234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 
00:33:49.539 [2024-09-30 23:02:16.492572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.492601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.492971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.493006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.493250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.493280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.493628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.493657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.494028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.494059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.494500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.494528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.494888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.494929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.495275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.495304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.495667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.495696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.496087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.496118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 
00:33:49.539 [2024-09-30 23:02:16.496496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.496524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.496892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.496939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.497273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.497301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.497671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.497701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.498064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.498094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.498456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.498485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.498847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.498876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.499258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.499287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.499646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.499674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.500021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.500052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 
00:33:49.539 [2024-09-30 23:02:16.500317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.500345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.500702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.500731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.501092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.501122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.501362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.501392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.501616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.501646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.502000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.502031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.502411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.502440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.502799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.502828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.503094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.503127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.503502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.503531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 
00:33:49.539 [2024-09-30 23:02:16.503907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.503937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.504289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.539 [2024-09-30 23:02:16.504317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.539 qpair failed and we were unable to recover it. 00:33:49.539 [2024-09-30 23:02:16.504672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.504701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.505065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.505095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.505351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.505378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.505632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.505660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.505942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.505972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.506330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.506359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.506733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.506763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.507115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.507146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 
00:33:49.540 [2024-09-30 23:02:16.507509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.507537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.507917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.507953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.508376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.508404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.508762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.508791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.509026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.509057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.509448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.509478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.509835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.509863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.510258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.510288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.510514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.510545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.510951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.510980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 
00:33:49.540 [2024-09-30 23:02:16.511353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.511381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.511759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.511788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.512052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.512080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.512473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.512502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.512862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.512891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.513277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.513307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.513556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.513586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.513975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.514006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.514246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.514275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.514524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.514553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 
00:33:49.540 [2024-09-30 23:02:16.514884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.514940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.515300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.515330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.515694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.515723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.516088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.516119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.516372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.516401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.540 [2024-09-30 23:02:16.516739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.540 [2024-09-30 23:02:16.516768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.540 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.517141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.517171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.517535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.517564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.517996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.518026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.518389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.518419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 
00:33:49.541 [2024-09-30 23:02:16.518801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.518831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.519095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.519126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.519356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.519387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.519769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.519799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.520140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.520170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.520540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.520570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.520938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.520968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.521250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.521278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.521648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.521676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 00:33:49.541 [2024-09-30 23:02:16.522018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.541 [2024-09-30 23:02:16.522048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.541 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence for tqpair=0x7f36dc000b90 (addr=10.0.0.2, port=4420) repeats through 2024-09-30 23:02:16.597 ...]
00:33:49.854 [2024-09-30 23:02:16.597545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.854 [2024-09-30 23:02:16.597574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:49.854 qpair failed and we were unable to recover it.
00:33:49.854 [2024-09-30 23:02:16.597954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.597984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.598344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.598381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.598727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.598756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.599065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.599094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.599510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.599538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.599915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.599944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.600290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.600319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.600725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.600753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.601138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.601168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.601537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.601565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 
00:33:49.854 [2024-09-30 23:02:16.601917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.601948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.602310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.602338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.602711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.602739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.603078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.603108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.603468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.603497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.603853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.603883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.604248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.604277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.604545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.604572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.604930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.604960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.605246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.605274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 
00:33:49.854 [2024-09-30 23:02:16.605649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.605679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.606051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.606081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.606449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.606477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.606836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.606865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.607256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.607286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.607652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.607681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.608047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.608079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.608458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.608487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.608844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.608884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.609284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.609314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 
00:33:49.854 [2024-09-30 23:02:16.609723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.609752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.610099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.610129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.610515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.610545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.610912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.610943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.611291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.611318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.611685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.854 [2024-09-30 23:02:16.611714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.854 qpair failed and we were unable to recover it. 00:33:49.854 [2024-09-30 23:02:16.611955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.611987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.612242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.612271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.612529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.612559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.612919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.612949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 
00:33:49.855 [2024-09-30 23:02:16.613282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.613310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.613681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.613710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.614080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.614110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.614469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.614499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.614872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.614911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.615276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.615305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.615671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.615700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.616065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.616095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.616462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.616491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.616776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.616803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 
00:33:49.855 [2024-09-30 23:02:16.617156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.617187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.617537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.617566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.617940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.617969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.618343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.618372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.618626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.618653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.619003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.619033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.619405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.619434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.619795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.619824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.620070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.620104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.620449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.620478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 
00:33:49.855 [2024-09-30 23:02:16.620854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.620883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.621253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.621282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.621652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.621681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.621929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.621959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.622238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.622266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.622606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.622633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.622862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.622890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.623245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.623273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.623634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.623669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.623816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.623856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 
00:33:49.855 [2024-09-30 23:02:16.624220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.624250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.624626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.624654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.624931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.624962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.625323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.625353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.855 [2024-09-30 23:02:16.625717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.855 [2024-09-30 23:02:16.625745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.855 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.626105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.626135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.626478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.626507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.626881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.626921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.627266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.627295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.627661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.627689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 
00:33:49.856 [2024-09-30 23:02:16.627926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.627957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.628251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.628280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.628721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.628751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.629128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.629157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.629519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.629547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.629929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.629960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.630367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.630398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.630741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.630771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.631125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.631155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.631402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.631430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 
00:33:49.856 [2024-09-30 23:02:16.631780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.631808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.632148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.632178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.632516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.632545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.632914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.632945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.633304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.633332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.633697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.633725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.633991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.634021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.634312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.634341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.634698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.634728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.635075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.635104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 
00:33:49.856 [2024-09-30 23:02:16.635469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.635497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.635857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.635886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.636230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.636259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.636623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.636652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.637031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.637060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.637446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.637475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.637837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.637866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.638264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.638294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.638692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.638726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.639157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.639187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 
00:33:49.856 [2024-09-30 23:02:16.639587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.639616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.639983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.640012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.640442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.856 [2024-09-30 23:02:16.640471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.856 qpair failed and we were unable to recover it. 00:33:49.856 [2024-09-30 23:02:16.640835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.640863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.641246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.641276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.641655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.641683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.642050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.642080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.642437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.642465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.642825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.642853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.643220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.643250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 
00:33:49.857 [2024-09-30 23:02:16.643484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.643514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.643771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.643799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.644179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.644211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.644570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.644598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.644966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.644995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.645252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.645281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.645578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.645606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.645959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.645990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.646338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.646368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.646728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.646757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 
00:33:49.857 [2024-09-30 23:02:16.647127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.647156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.647513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.647541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.647909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.647939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.648348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.648377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.648745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.648774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.649116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.649147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.649518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.649546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.649922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.649952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.650320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.650348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.650680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.650719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 
00:33:49.857 [2024-09-30 23:02:16.651084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.651114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.651473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.651502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.651870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.651911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.652263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.652291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.652690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.652719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.653094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.653124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.653375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.653406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.653779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.653806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.857 qpair failed and we were unable to recover it. 00:33:49.857 [2024-09-30 23:02:16.654177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.857 [2024-09-30 23:02:16.654213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.654573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.654602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 
00:33:49.858 [2024-09-30 23:02:16.654967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.654996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.655379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.655408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.655655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.655687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.656032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.656063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.656414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.656443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.656683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.656715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.657070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.657103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.657448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.657477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.657840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.657869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.659758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.659826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 
00:33:49.858 [2024-09-30 23:02:16.660306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.660345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.660715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.660746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.660996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.661027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.661380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.661409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.661656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.661690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.662045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.662076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.662328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.662357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.662725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.662755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.663120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.663151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.663500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.663529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 
00:33:49.858 [2024-09-30 23:02:16.663769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.663802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.664156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.664187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.664552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.664581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.664950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.664982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.665332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.665364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.665725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.665757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.666121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.666151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.666580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.666608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.666973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.667004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.667356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.667385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 
00:33:49.858 [2024-09-30 23:02:16.667752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.667781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.668144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.668174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.668533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.668561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.668919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.668953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.669318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.669348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.671012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.858 [2024-09-30 23:02:16.671077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.858 qpair failed and we were unable to recover it. 00:33:49.858 [2024-09-30 23:02:16.671447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.671482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.671716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.671748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.672124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.672165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.672404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.672434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 
00:33:49.859 [2024-09-30 23:02:16.672792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.672821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.673194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.673226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.673584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.673614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.673970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.674003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.674349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.674381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.674723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.674753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.675119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.675150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.675392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.675421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.675768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.675798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.676152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.676183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 
00:33:49.859 [2024-09-30 23:02:16.676585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.676615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.676965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.676996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.678690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.678751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.679183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.679221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.679599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.679630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.679990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.680022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.680391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.680419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.680794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.680824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.681186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.681219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.681513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.681541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 
00:33:49.859 [2024-09-30 23:02:16.681913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.681945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.682300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.682330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.682684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.682715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.683081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.683111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.683490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.683519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.683861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.683941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.685646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.685701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.686094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.686127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.687815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.687868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.688241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.688273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 
00:33:49.859 [2024-09-30 23:02:16.688634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.688663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.689025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.689057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.689448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.689477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.859 [2024-09-30 23:02:16.689839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.859 [2024-09-30 23:02:16.689868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.859 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.690145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.690175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.690537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.690567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.690927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.690958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.691349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.691378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.691751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.691790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.692014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.692049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 
00:33:49.860 [2024-09-30 23:02:16.692408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.692438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.692684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.692716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.692982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.693014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.693256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.693288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.693561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.693594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.693950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.693980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.694352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.694381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.694813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.694843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.695037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.695066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.695445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.695474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 
00:33:49.860 [2024-09-30 23:02:16.695844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.695875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.696329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.696360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.696699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.696731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.697077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.697112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.697475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.697508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.697752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.697783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.698152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.698184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.698417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.698450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.698821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.698853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.699137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.699168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 
00:33:49.860 [2024-09-30 23:02:16.699552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.699583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.699942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.699974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.700322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.700352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.700694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.700723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.701015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.701046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.701435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.701465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.701719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.701748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.702029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.702060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.702430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.702460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.702827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.702856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 
00:33:49.860 [2024-09-30 23:02:16.703169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.860 [2024-09-30 23:02:16.703200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.860 qpair failed and we were unable to recover it. 00:33:49.860 [2024-09-30 23:02:16.703581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.703611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.703987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.704018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.704386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.704416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.704789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.704821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.705096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.705128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.705414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.705444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.705791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.705820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.706182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.706218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.706580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.706610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 
00:33:49.861 [2024-09-30 23:02:16.706866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.706908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.707368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.707400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.707752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.707784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.708164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.708198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.708469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.708499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.708851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.708881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.709293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.709324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.709695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.709728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.710077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.710107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.710468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.710498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 
00:33:49.861 [2024-09-30 23:02:16.710858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.710889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.711273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.711303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.711562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.711592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.711880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.711941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.712210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.712239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.712603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.712633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.712918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.712950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.713346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.713376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.713614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.713646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.713984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.714015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 
00:33:49.861 [2024-09-30 23:02:16.714372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.714411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.714750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.714780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.715229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.715260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.715609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.715639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.715919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.715949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.716386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.716416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.716845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.716875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.717154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.717184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.717528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.717557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 00:33:49.861 [2024-09-30 23:02:16.717821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.861 [2024-09-30 23:02:16.717850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.861 qpair failed and we were unable to recover it. 
00:33:49.862 [2024-09-30 23:02:16.718188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.718218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.718587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.718617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.718980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.719012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.719397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.719426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.719857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.719886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.720264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.720293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.720564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.720594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.720836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.720867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.721238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.721273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 00:33:49.862 [2024-09-30 23:02:16.721639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.862 [2024-09-30 23:02:16.721670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.862 qpair failed and we were unable to recover it. 
00:33:49.867 [2024-09-30 23:02:16.795589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.795618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.795958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.795989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.796361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.796390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.796749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.796778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.797146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.797176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.797553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.797581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.797952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.797982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.798289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.798317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.798695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.798731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.799126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.799156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 
00:33:49.867 [2024-09-30 23:02:16.799504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.799534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.799883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.799926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.800297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.800326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.800694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.800724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.801073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.801103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.801461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.801489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.801862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.801891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.802290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.802319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.802693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.802721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.803077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.803106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 
00:33:49.867 [2024-09-30 23:02:16.803467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.803496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.867 qpair failed and we were unable to recover it. 00:33:49.867 [2024-09-30 23:02:16.803862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.867 [2024-09-30 23:02:16.803892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.804280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.804310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.804677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.804707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.805069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.805100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.805464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.805495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.805851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.805882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.806240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.806270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.806632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.806661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.807009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.807039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 
00:33:49.868 [2024-09-30 23:02:16.807398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.807427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.807793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.807821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.808227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.808257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.808613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.808642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.808983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.809015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.809371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.809406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.809764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.809793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.810160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.810191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.810575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.810604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.810978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.811007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 
00:33:49.868 [2024-09-30 23:02:16.811402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.811431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.811671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.811702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.812068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.812099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.812432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.812462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.812825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.812854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.813226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.813257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.813518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.813546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.813911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.813942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.814333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.814362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.814713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.814743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 
00:33:49.868 [2024-09-30 23:02:16.815035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.815066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.815432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.815461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.815828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.815856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.816243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.816274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.816674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.816703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.817129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.817159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.817516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.817546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.817914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.817944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.818292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.818323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 00:33:49.868 [2024-09-30 23:02:16.818676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.868 [2024-09-30 23:02:16.818705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.868 qpair failed and we were unable to recover it. 
00:33:49.868 [2024-09-30 23:02:16.819066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.819097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.819472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.819501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.819857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.819887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.820262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.820292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.820650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.820678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.821049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.821080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.821446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.821475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.821833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.821861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.822286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.822316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.822563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.822592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 
00:33:49.869 [2024-09-30 23:02:16.822951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.822981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.823360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.823389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.823734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.823763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.824125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.824156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.824387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.824416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.824796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.824832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.825202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.825232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.825587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.825617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.826016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.826046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.826448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.826476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 
00:33:49.869 [2024-09-30 23:02:16.826809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.826838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.827201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.827231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.827635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.827664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.828036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.828066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.828425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.828454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.828821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.828850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.829251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.829281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.829657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.829687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.830057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.830089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.830432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.830461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 
00:33:49.869 [2024-09-30 23:02:16.830637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.830668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.831043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.831075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.831421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.831449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.831811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.831840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.832217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.832247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.832644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.832673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.833046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.869 [2024-09-30 23:02:16.833076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.869 qpair failed and we were unable to recover it. 00:33:49.869 [2024-09-30 23:02:16.833431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.833461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.833823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.833853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.834231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.834262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 
00:33:49.870 [2024-09-30 23:02:16.834648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.834676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.835018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.835049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.835425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.835455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.835818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.835849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.836236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.836267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.836613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.836641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.837015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.837045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.837389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.837419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.837788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.837818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.838176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.838206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 
00:33:49.870 [2024-09-30 23:02:16.838463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.838495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.838714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.838742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.839105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.839136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.839515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.839545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.839925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.839956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.840307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.840349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.840669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.840698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.841009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.841039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.841406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.841435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.841744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.841774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 
00:33:49.870 [2024-09-30 23:02:16.842132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.842161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.842523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.842553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.842923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.842955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.843295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.843325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.843689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.843717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.844096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.844126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.844485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.844514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.844908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.844939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.845345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.845374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.845822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.845852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 
00:33:49.870 [2024-09-30 23:02:16.846246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.846277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.846638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.846667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.846933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.846968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.847328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.847357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.847716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.847744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.870 qpair failed and we were unable to recover it. 00:33:49.870 [2024-09-30 23:02:16.848077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.870 [2024-09-30 23:02:16.848107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.871 qpair failed and we were unable to recover it. 00:33:49.871 [2024-09-30 23:02:16.848472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.871 [2024-09-30 23:02:16.848501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.871 qpair failed and we were unable to recover it. 00:33:49.871 [2024-09-30 23:02:16.848875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.871 [2024-09-30 23:02:16.848916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.871 qpair failed and we were unable to recover it. 00:33:49.871 [2024-09-30 23:02:16.849274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.871 [2024-09-30 23:02:16.849303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:49.871 qpair failed and we were unable to recover it. 00:33:50.145 [2024-09-30 23:02:16.851692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.145 [2024-09-30 23:02:16.851763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.145 qpair failed and we were unable to recover it. 
00:33:50.145 [2024-09-30 23:02:16.852162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.145 [2024-09-30 23:02:16.852198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:50.145 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats roughly 200 more times between 23:02:16.852 and 23:02:16.933, every attempt failing with errno = 111 for tqpair=0x7f36dc000b90 at addr=10.0.0.2, port=4420 ...]
00:33:50.153 [2024-09-30 23:02:16.933050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.153 [2024-09-30 23:02:16.933082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:50.153 qpair failed and we were unable to recover it.
00:33:50.153 [2024-09-30 23:02:16.933417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.933446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.933804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.933835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.934243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.934274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.934567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.934595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.934822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.934852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.935124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.935161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.935403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.935435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.935775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.935806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.936160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.936192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.936546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.936576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 
00:33:50.153 [2024-09-30 23:02:16.936959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.936990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.937362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.937392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.937764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.937793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.938207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.938239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.938627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.938659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.939077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.939109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.939360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.939390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.939741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.939772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.940186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.940218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.940578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.940610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 
00:33:50.153 [2024-09-30 23:02:16.940985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.941017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.941465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.941495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.941840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.941870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.942271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.942301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.942700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.942730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.943194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.943224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.943570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.943608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.943847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.943877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.153 [2024-09-30 23:02:16.944190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.153 [2024-09-30 23:02:16.944221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.153 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.944664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.944694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 
00:33:50.154 [2024-09-30 23:02:16.944960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.944990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.945359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.945389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.945753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.945785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.946161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.946190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.946552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.946583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.946952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.946985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.947364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.947394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.947765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.947795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.948232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.948262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.948594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.948623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 
00:33:50.154 [2024-09-30 23:02:16.948879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.948922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.949351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.949381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.949634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.949665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.949960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.949992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.950386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.950415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.950659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.950693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.951063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.951095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.951424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.951453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.951804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.951834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.952225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.952255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 
00:33:50.154 [2024-09-30 23:02:16.952624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.952653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.953029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.953059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.953320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.953351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.953705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.953737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.954160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.954191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.954432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.954461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.154 [2024-09-30 23:02:16.954861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.154 [2024-09-30 23:02:16.954891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.154 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.955283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.955315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.955698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.955727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.955998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.956031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 
00:33:50.155 [2024-09-30 23:02:16.956415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.956446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.956758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.956786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.957147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.957178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.957523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.957552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.957806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.957837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.958292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.958324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.958665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.958694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.959046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.959078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.959370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.959399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.959644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.959674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 
00:33:50.155 [2024-09-30 23:02:16.959988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.960018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.960367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.960397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.960749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.960779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.961226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.961256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.961605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.961634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.962064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.962096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.962439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.962466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.962743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.962772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.963189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.963221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.963445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.963475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 
00:33:50.155 [2024-09-30 23:02:16.963849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.963879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.964247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.964278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.964757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.964786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.965153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.965182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.965530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.965558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.965920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.965956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.966428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.966458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.966747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.966775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.967180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.967211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 00:33:50.155 [2024-09-30 23:02:16.967577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.155 [2024-09-30 23:02:16.967606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.155 qpair failed and we were unable to recover it. 
00:33:50.156 [2024-09-30 23:02:16.967942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.967978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.968219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.968249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.968595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.968625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.968906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.968939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.969356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.969385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.969740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.969769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.970248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.970278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.970649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.970678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.971157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.971188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.971560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.971590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 
00:33:50.156 [2024-09-30 23:02:16.971850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.971879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.972324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.972354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.972735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.972763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.973035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.973065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.973420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.973450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.973822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.973852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.974000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.974030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.974293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.974326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.974602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.974630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.974935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.974965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 
00:33:50.156 [2024-09-30 23:02:16.975353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.975383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.975648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.975676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.975979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.976010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.976419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.976448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.976825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.976853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.977183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.977214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.977591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.156 [2024-09-30 23:02:16.977620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.156 qpair failed and we were unable to recover it. 00:33:50.156 [2024-09-30 23:02:16.977926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.977956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.978310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.978339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.978697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.978725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 
00:33:50.157 [2024-09-30 23:02:16.979091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.979122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.979495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.979523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.979881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.979921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.980275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.980305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.980667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.980695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.980962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.980999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.981355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.981385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.981617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.981648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.981887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.981928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 00:33:50.157 [2024-09-30 23:02:16.982189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.157 [2024-09-30 23:02:16.982217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.157 qpair failed and we were unable to recover it. 
00:33:50.157 [2024-09-30 23:02:16.982582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.157 [2024-09-30 23:02:16.982610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:50.157 qpair failed and we were unable to recover it.
00:33:50.157 [... duplicates elided: the same three-line failure (posix.c:1055 connect() errno = 111; nvme_tcp.c:2399 sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt timestamped 23:02:16.982 through 23:02:17.062 ...]
00:33:50.164 [2024-09-30 23:02:17.062448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.062478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.062818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.062847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.063152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.063182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.063442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.063471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.063836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.063864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.064247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.064278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.064639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.064668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.065027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.065058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.065418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.065447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.065841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.065870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 
00:33:50.164 [2024-09-30 23:02:17.066240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.066269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.066620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.066651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.066865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.066914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.067276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.067306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.067691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.067720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.164 [2024-09-30 23:02:17.068074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.164 [2024-09-30 23:02:17.068105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.164 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.068455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.068484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.068849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.068878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.069130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.069162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.069521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.069550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 
00:33:50.165 [2024-09-30 23:02:17.069802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.069833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.070211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.070242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.070606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.070634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.070962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.070993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.071364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.071394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.071645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.071674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.072030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.072061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.072422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.072451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.072691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.072722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.073076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.073106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 
00:33:50.165 [2024-09-30 23:02:17.073494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.073523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.073871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.073912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.074248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.074277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.074635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.074664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.075034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.075064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.075405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.075434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.075796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.075825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.076187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.076217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.076634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.076664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.077036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.077067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 
00:33:50.165 [2024-09-30 23:02:17.077436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.077464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.077807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.077836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.078205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.078235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.078604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.078633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.078970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.079002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.079363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.079393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.079764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.079793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.080150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.080181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.080542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.080572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.080936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.080967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 
00:33:50.165 [2024-09-30 23:02:17.081358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.081387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.081737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.081766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.082167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.082203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.082559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.165 [2024-09-30 23:02:17.082589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.165 qpair failed and we were unable to recover it. 00:33:50.165 [2024-09-30 23:02:17.082971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.083003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.083329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.083360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.083738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.083767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.084118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.084148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.084510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.084539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.084909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.084939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 
00:33:50.166 [2024-09-30 23:02:17.085102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.085133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.085556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.085587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.085933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.085963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.086357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.086387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.086738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.086766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.086954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.086987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.087368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.087397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.087770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.087799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.088185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.088215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.088583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.088612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 
00:33:50.166 [2024-09-30 23:02:17.088954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.088983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.089355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.089385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.089757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.089787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.090112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.090143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.090513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.090543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.090913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.090944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.091308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.091336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.091717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.091746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.092074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.092104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.092493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.092523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 
00:33:50.166 [2024-09-30 23:02:17.092957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.092987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.093328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.093357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.093726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.093755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.094119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.094149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.094493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.094521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.094881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.094920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.095316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.095346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.095590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.095618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.095981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.096012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.096362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.096392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 
00:33:50.166 [2024-09-30 23:02:17.096769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.096797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.097156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.097188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.166 [2024-09-30 23:02:17.097525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.166 [2024-09-30 23:02:17.097560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.166 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.097917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.097948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.098191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.098219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.098461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.098492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.098847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.098877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.099121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.099153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.099499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.099529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.099906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.099938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 
00:33:50.167 [2024-09-30 23:02:17.100303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.100331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.100699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.100727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.100956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.100988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.101357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.101386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.101753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.101782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.102152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.102183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.102354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.102383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.102620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.102649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.103025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.103056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.103409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.103438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 
00:33:50.167 [2024-09-30 23:02:17.103821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.103850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.104072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.104103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.104457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.104486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.104862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.104891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.105279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.105309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.105644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.105673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.106036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.106067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.106426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.106455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.106790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.106819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.107185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.107215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 
00:33:50.167 [2024-09-30 23:02:17.107588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.107617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.107981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.108011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.108357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.108385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.108753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.167 [2024-09-30 23:02:17.108782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.167 qpair failed and we were unable to recover it. 00:33:50.167 [2024-09-30 23:02:17.109044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.168 [2024-09-30 23:02:17.109073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.168 qpair failed and we were unable to recover it. 00:33:50.168 [2024-09-30 23:02:17.109440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.168 [2024-09-30 23:02:17.109469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.168 qpair failed and we were unable to recover it. 00:33:50.168 [2024-09-30 23:02:17.109843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.168 [2024-09-30 23:02:17.109872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.168 qpair failed and we were unable to recover it. 00:33:50.168 [2024-09-30 23:02:17.110145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.168 [2024-09-30 23:02:17.110174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.168 qpair failed and we were unable to recover it. 00:33:50.168 [2024-09-30 23:02:17.110530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.168 [2024-09-30 23:02:17.110559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.168 qpair failed and we were unable to recover it. 00:33:50.168 [2024-09-30 23:02:17.110910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.168 [2024-09-30 23:02:17.110941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.168 qpair failed and we were unable to recover it. 
00:33:50.168 [2024-09-30 23:02:17.111178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.168 [2024-09-30 23:02:17.111207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.168 qpair failed and we were unable to recover it.
[... the same three-message failure sequence repeats roughly 200 more times with only the timestamps advancing (through [2024-09-30 23:02:17.193946], wall-clock 00:33:50.168-00:33:50.450): each connect() to 10.0.0.2 port 4420 is refused with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f36dc000b90, and every qpair fails and cannot be recovered ...]
00:33:50.450 [2024-09-30 23:02:17.194319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.194348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.194603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.194633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.194979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.195011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.195398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.195429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.195787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.195816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.196071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.196101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.196478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.196509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.196874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.196934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.197333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.197363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.197587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.197618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 
00:33:50.450 [2024-09-30 23:02:17.197971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.198001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.198256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.198285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.198653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.198682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.199017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.199049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.199448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.199477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.199879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.199918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.200318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.200348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.200711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.200740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.201119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.201151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.201545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.201575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 
00:33:50.450 [2024-09-30 23:02:17.201767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.201796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.202150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.202179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.202563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.202593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.202990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.203022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.203383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.203412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.203779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.203809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.204192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.204224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.204578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.204608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.450 [2024-09-30 23:02:17.204981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.450 [2024-09-30 23:02:17.205011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.450 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.205293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.205323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 
00:33:50.451 [2024-09-30 23:02:17.205735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.205764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.206124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.206154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.206540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.206570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.206933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.206964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.207396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.207425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.207800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.207829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.208194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.208224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.208587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.208616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.208997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.209028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.209424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.209453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 
00:33:50.451 [2024-09-30 23:02:17.209705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.209734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.210094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.210126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.210400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.210429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.210720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.210749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.211104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.211135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.211502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.211530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.211869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.212010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.212235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.212266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.212671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.212699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.213060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.213090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 
00:33:50.451 [2024-09-30 23:02:17.213460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.213489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.213848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.213877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.214161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.214190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.451 [2024-09-30 23:02:17.214408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.451 [2024-09-30 23:02:17.214445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.451 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.214645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.214674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.215021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.215051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.215386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.215415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.215658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.215689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.215937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.215968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.216326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.216361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 
00:33:50.452 [2024-09-30 23:02:17.216797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.216826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.217183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.217213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.217571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.217599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.217814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.217845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.218212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.218241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.218606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.218636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.219001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.219032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.219400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.219429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.219683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.219711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.220073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.220103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 
00:33:50.452 [2024-09-30 23:02:17.220475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.220503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.220756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.220787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.221152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.221182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.221574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.221604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.221962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.221992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.222343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.222373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.222735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.222762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.223189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.223222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.223566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.223597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 00:33:50.452 [2024-09-30 23:02:17.223964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.452 [2024-09-30 23:02:17.223995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.452 qpair failed and we were unable to recover it. 
00:33:50.453 [2024-09-30 23:02:17.224376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.453 [2024-09-30 23:02:17.224404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.453 qpair failed and we were unable to recover it. 00:33:50.453 [2024-09-30 23:02:17.224787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.453 [2024-09-30 23:02:17.224815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.453 qpair failed and we were unable to recover it. 00:33:50.453 [2024-09-30 23:02:17.225157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.453 [2024-09-30 23:02:17.225186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.453 qpair failed and we were unable to recover it. 00:33:50.453 [2024-09-30 23:02:17.225533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.453 [2024-09-30 23:02:17.225562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.453 qpair failed and we were unable to recover it. 00:33:50.454 [2024-09-30 23:02:17.225920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.454 [2024-09-30 23:02:17.225950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.454 qpair failed and we were unable to recover it. 00:33:50.454 [2024-09-30 23:02:17.226347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.454 [2024-09-30 23:02:17.226376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.454 qpair failed and we were unable to recover it. 00:33:50.454 [2024-09-30 23:02:17.226811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.454 [2024-09-30 23:02:17.226841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.454 qpair failed and we were unable to recover it. 00:33:50.454 [2024-09-30 23:02:17.227197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.454 [2024-09-30 23:02:17.227229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.454 qpair failed and we were unable to recover it. 00:33:50.454 [2024-09-30 23:02:17.227611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.454 [2024-09-30 23:02:17.227640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.454 qpair failed and we were unable to recover it. 00:33:50.454 [2024-09-30 23:02:17.228012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.454 [2024-09-30 23:02:17.228043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.454 qpair failed and we were unable to recover it. 
00:33:50.454 [2024-09-30 23:02:17.228274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.454 [2024-09-30 23:02:17.228305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.454 qpair failed and we were unable to recover it. 00:33:50.454 [2024-09-30 23:02:17.228685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.455 [2024-09-30 23:02:17.228714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.455 qpair failed and we were unable to recover it. 00:33:50.455 [2024-09-30 23:02:17.228953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.455 [2024-09-30 23:02:17.228983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.455 qpair failed and we were unable to recover it. 00:33:50.455 [2024-09-30 23:02:17.229223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.455 [2024-09-30 23:02:17.229252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.455 qpair failed and we were unable to recover it. 00:33:50.455 [2024-09-30 23:02:17.229709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.455 [2024-09-30 23:02:17.229737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.455 qpair failed and we were unable to recover it. 00:33:50.455 [2024-09-30 23:02:17.230069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.455 [2024-09-30 23:02:17.230099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.455 qpair failed and we were unable to recover it. 00:33:50.455 [2024-09-30 23:02:17.230506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.455 [2024-09-30 23:02:17.230534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.455 qpair failed and we were unable to recover it. 00:33:50.455 [2024-09-30 23:02:17.230793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.455 [2024-09-30 23:02:17.230821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.455 qpair failed and we were unable to recover it. 00:33:50.456 [2024-09-30 23:02:17.231234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.456 [2024-09-30 23:02:17.231264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.456 qpair failed and we were unable to recover it. 00:33:50.456 [2024-09-30 23:02:17.231616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.456 [2024-09-30 23:02:17.231651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.456 qpair failed and we were unable to recover it. 
00:33:50.456 [2024-09-30 23:02:17.232018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.456 [2024-09-30 23:02:17.232048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.456 qpair failed and we were unable to recover it. 00:33:50.456 [2024-09-30 23:02:17.232258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.456 [2024-09-30 23:02:17.232288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.456 qpair failed and we were unable to recover it. 00:33:50.456 [2024-09-30 23:02:17.232644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.456 [2024-09-30 23:02:17.232673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.456 qpair failed and we were unable to recover it. 00:33:50.456 [2024-09-30 23:02:17.232973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.233002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 00:33:50.457 [2024-09-30 23:02:17.233355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.233383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 00:33:50.457 [2024-09-30 23:02:17.233749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.233778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 00:33:50.457 [2024-09-30 23:02:17.234147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.234176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 00:33:50.457 [2024-09-30 23:02:17.234514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.234543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 00:33:50.457 [2024-09-30 23:02:17.234914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.234944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 00:33:50.457 [2024-09-30 23:02:17.235303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.235332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 
00:33:50.457 [2024-09-30 23:02:17.235700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.235730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 00:33:50.457 [2024-09-30 23:02:17.236074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.457 [2024-09-30 23:02:17.236104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.457 qpair failed and we were unable to recover it. 00:33:50.457 [2024-09-30 23:02:17.236391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 [2024-09-30 23:02:17.236419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.458 qpair failed and we were unable to recover it. 00:33:50.458 [2024-09-30 23:02:17.236782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 [2024-09-30 23:02:17.236811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.458 qpair failed and we were unable to recover it. 00:33:50.458 [2024-09-30 23:02:17.237169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 [2024-09-30 23:02:17.237199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.458 qpair failed and we were unable to recover it. 00:33:50.458 [2024-09-30 23:02:17.237542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 [2024-09-30 23:02:17.237570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.458 qpair failed and we were unable to recover it. 00:33:50.458 [2024-09-30 23:02:17.237989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 [2024-09-30 23:02:17.238018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.458 qpair failed and we were unable to recover it. 00:33:50.458 [2024-09-30 23:02:17.238383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 [2024-09-30 23:02:17.238413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.458 qpair failed and we were unable to recover it. 00:33:50.458 [2024-09-30 23:02:17.238765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.458 [2024-09-30 23:02:17.238793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.458 qpair failed and we were unable to recover it. 00:33:50.459 [2024-09-30 23:02:17.239138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.459 [2024-09-30 23:02:17.239168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.459 qpair failed and we were unable to recover it. 
00:33:50.459 [2024-09-30 23:02:17.239530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.459 [2024-09-30 23:02:17.239559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.459 qpair failed and we were unable to recover it. 00:33:50.459 [2024-09-30 23:02:17.239793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.459 [2024-09-30 23:02:17.239822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.459 qpair failed and we were unable to recover it. 00:33:50.459 [2024-09-30 23:02:17.240194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.459 [2024-09-30 23:02:17.240224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.459 qpair failed and we were unable to recover it. 00:33:50.459 [2024-09-30 23:02:17.240621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.459 [2024-09-30 23:02:17.240650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.459 qpair failed and we were unable to recover it. 00:33:50.459 [2024-09-30 23:02:17.240880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.459 [2024-09-30 23:02:17.240919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.459 qpair failed and we were unable to recover it. 00:33:50.460 [2024-09-30 23:02:17.241316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.460 [2024-09-30 23:02:17.241344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.460 qpair failed and we were unable to recover it. 00:33:50.460 [2024-09-30 23:02:17.241698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.460 [2024-09-30 23:02:17.241727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.460 qpair failed and we were unable to recover it. 00:33:50.460 [2024-09-30 23:02:17.241969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.460 [2024-09-30 23:02:17.242003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.460 qpair failed and we were unable to recover it. 00:33:50.460 [2024-09-30 23:02:17.242389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.460 [2024-09-30 23:02:17.242418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.460 qpair failed and we were unable to recover it. 00:33:50.460 [2024-09-30 23:02:17.242720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.460 [2024-09-30 23:02:17.242749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.460 qpair failed and we were unable to recover it. 
00:33:50.460 [2024-09-30 23:02:17.243120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.460 [2024-09-30 23:02:17.243150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.461 qpair failed and we were unable to recover it. 00:33:50.461 [2024-09-30 23:02:17.243509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.461 [2024-09-30 23:02:17.243538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.461 qpair failed and we were unable to recover it. 00:33:50.461 [2024-09-30 23:02:17.243925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.461 [2024-09-30 23:02:17.243955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.461 qpair failed and we were unable to recover it. 00:33:50.461 [2024-09-30 23:02:17.244352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.461 [2024-09-30 23:02:17.244381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.461 qpair failed and we were unable to recover it. 00:33:50.461 [2024-09-30 23:02:17.244732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.461 [2024-09-30 23:02:17.244760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.461 qpair failed and we were unable to recover it. 00:33:50.461 [2024-09-30 23:02:17.245131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.461 [2024-09-30 23:02:17.245161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.461 qpair failed and we were unable to recover it. 00:33:50.461 [2024-09-30 23:02:17.245524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.461 [2024-09-30 23:02:17.245553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.461 qpair failed and we were unable to recover it. 00:33:50.462 [2024-09-30 23:02:17.245918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.462 [2024-09-30 23:02:17.245947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.462 qpair failed and we were unable to recover it. 00:33:50.462 [2024-09-30 23:02:17.246312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.462 [2024-09-30 23:02:17.246340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.462 qpair failed and we were unable to recover it. 00:33:50.462 [2024-09-30 23:02:17.246700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.462 [2024-09-30 23:02:17.246735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.462 qpair failed and we were unable to recover it. 
00:33:50.468 [2024-09-30 23:02:17.317858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.317887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.318249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.318280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.318655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.318684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.319034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.319064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.319446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.319476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.319834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.319862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.320237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.320267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.320640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.320668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.321027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.321057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.321411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.321446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 
00:33:50.468 [2024-09-30 23:02:17.321755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.321783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.322132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.322162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.322500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.322530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.322901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.322930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.323308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.323337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.323703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.323732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.324074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.324104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.324487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.324515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.324876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.324916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.325279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.325308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 
00:33:50.468 [2024-09-30 23:02:17.325670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.325699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.326066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.326096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.326454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.326483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.326737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.326767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.327188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.327219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.468 [2024-09-30 23:02:17.327580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.468 [2024-09-30 23:02:17.327609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.468 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.327973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.328002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.328355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.328384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.328751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.328781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.329158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.329188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 
00:33:50.469 [2024-09-30 23:02:17.329568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.329598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.329973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.330004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.330377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.330406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.330713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.330743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.331127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.331157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 00:33:50.469 [2024-09-30 23:02:17.331524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.469 [2024-09-30 23:02:17.331552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:50.469 qpair failed and we were unable to recover it. 
00:33:50.469 [2024-09-30 23:02:17.331798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c00f0 is same with the state(6) to be set
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Write completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 Read completed with error (sct=0, sc=8)
00:33:50.469 starting I/O failed
00:33:50.469 [2024-09-30 23:02:17.332760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:50.469 [2024-09-30 23:02:17.333226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.469 [2024-09-30 23:02:17.333344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:50.469 qpair failed and we were unable to recover it.
...
00:33:50.473 [2024-09-30 23:02:17.381520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.473 [2024-09-30 23:02:17.381549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:50.473 qpair failed and we were unable to recover it.
00:33:50.474 [2024-09-30 23:02:17.381892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.381929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.382283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.382312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.382689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.382717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.383072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.383102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.383475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.383503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.383859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.383888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.384327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.384356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.384758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.384786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.385148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.385183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.385501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.385529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 
00:33:50.474 [2024-09-30 23:02:17.385865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.385901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.386256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.386285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.386655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.386684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.386932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.386962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.387317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.387345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.387704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.387733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.387940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.387971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.388348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.388377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.388759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.388788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.389160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.389189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 
00:33:50.474 [2024-09-30 23:02:17.389547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.389576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.389946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.389975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.390425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.390454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.390824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.390853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.391219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.391249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.391578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.391606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.391827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.391856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.392247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.392278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.392639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.392668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.474 [2024-09-30 23:02:17.393040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.393069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 
00:33:50.474 [2024-09-30 23:02:17.393426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.474 [2024-09-30 23:02:17.393456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.474 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.393773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.393801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.394159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.394188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.394552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.394580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.394942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.394972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.395323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.395353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.395748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.395777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.396123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.396153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.396512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.396541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.396913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.396943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 
00:33:50.475 [2024-09-30 23:02:17.397309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.397338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.397700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.397728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.398117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.398146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.398485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.398514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.398878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.398937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.399181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.399208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.399448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.399476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.399851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.399879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.400279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.400308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.400656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.400684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 
00:33:50.475 [2024-09-30 23:02:17.401038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.401069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.401377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.401407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.401781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.401809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.402153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.402183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.402558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.402586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.402959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.402989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.403360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.403388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.475 [2024-09-30 23:02:17.403763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.475 [2024-09-30 23:02:17.403792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.475 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.404165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.404195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.404569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.404597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 
00:33:50.476 [2024-09-30 23:02:17.404942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.404971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.405238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.405266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.405656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.405685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.406050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.406079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.406438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.406469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.406841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.406871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.407243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.407274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.407531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.407563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.407929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.407959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.408365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.408394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 
00:33:50.476 [2024-09-30 23:02:17.408640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.408671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.409044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.409077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.409440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.409469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.409775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.409803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.410177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.410208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.410495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.410529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.410902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.410932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.411170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.411198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.411434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.411476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.411820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.411849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 
00:33:50.476 [2024-09-30 23:02:17.412209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.412240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.412603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.412632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.412942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.412974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.413373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.413402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.413764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.413793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.414154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.414186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.414524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.414553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.414916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.414948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.415352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.415381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.415757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.415786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 
00:33:50.476 [2024-09-30 23:02:17.416162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.416192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.416444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.416474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.476 [2024-09-30 23:02:17.416693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.476 [2024-09-30 23:02:17.416720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.476 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.417076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.417107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.417486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.417516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.417875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.417914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.418267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.418296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.418661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.418691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.419030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.419060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.419501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.419529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 
00:33:50.477 [2024-09-30 23:02:17.419867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.419906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.420254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.420282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.420648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.420678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.420942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.420975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.421378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.421409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.421823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.421853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.422209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.422237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.422558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.422587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.422940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.422970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.423411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.423441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 
00:33:50.477 [2024-09-30 23:02:17.423804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.423834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.424083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.424115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.424389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.424421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.424771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.424801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.425193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.425223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.425575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.425610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.425845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.425873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.426260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.426289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.426648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.426677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.427036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.427067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 
00:33:50.477 [2024-09-30 23:02:17.427433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.427462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.427719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.427747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.428072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.428101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.428462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.428491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.428861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.477 [2024-09-30 23:02:17.428889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.477 qpair failed and we were unable to recover it. 00:33:50.477 [2024-09-30 23:02:17.429239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.478 [2024-09-30 23:02:17.429268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.478 qpair failed and we were unable to recover it. 00:33:50.478 [2024-09-30 23:02:17.429632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.478 [2024-09-30 23:02:17.429661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.478 qpair failed and we were unable to recover it. 00:33:50.478 [2024-09-30 23:02:17.430025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.478 [2024-09-30 23:02:17.430055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.478 qpair failed and we were unable to recover it. 00:33:50.478 [2024-09-30 23:02:17.430468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.478 [2024-09-30 23:02:17.430497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.478 qpair failed and we were unable to recover it. 00:33:50.478 [2024-09-30 23:02:17.430892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.478 [2024-09-30 23:02:17.430930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.478 qpair failed and we were unable to recover it. 
00:33:50.478 [2024-09-30 23:02:17.431174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.478 [2024-09-30 23:02:17.431204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.478 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure triplet repeats for every retry from 23:02:17.431 through 23:02:17.510 (Jenkins timestamps 00:33:50.478-00:33:50.757); only the timestamps differ ...]
00:33:50.757 [2024-09-30 23:02:17.510820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.510848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.511219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.511248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.511606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.511635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.511990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.512019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.512357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.512387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.512748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.512777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.513117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.513148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.513485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.513513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.513755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.513786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.514155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.514184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 
00:33:50.757 [2024-09-30 23:02:17.514554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.757 [2024-09-30 23:02:17.514583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.757 qpair failed and we were unable to recover it. 00:33:50.757 [2024-09-30 23:02:17.514912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.514942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.515309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.515337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.515677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.515713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.516051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.516080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.516293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.516328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.516580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.516609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.516968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.516998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.517391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.517419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.517789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.517818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 
00:33:50.758 [2024-09-30 23:02:17.518152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.518181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.518541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.518569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.518942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.518971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.519338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.519366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.519736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.519764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.520149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.520178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.520536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.520564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.520938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.520974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.521353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.521381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.521663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.521691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 
00:33:50.758 [2024-09-30 23:02:17.521913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.521943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.522326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.522356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.522729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.522757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.523154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.523184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.523528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.523557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.523919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.523949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.524315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.524343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.524707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.524735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.525083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.525112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.525469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.525498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 
00:33:50.758 [2024-09-30 23:02:17.525860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.525888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.526236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.526264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.526600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.526629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.526989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.527018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.527376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.527404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.527646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.527677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.528067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.528096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.528467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.528495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.528863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.528891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 00:33:50.758 [2024-09-30 23:02:17.529290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.758 [2024-09-30 23:02:17.529320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.758 qpair failed and we were unable to recover it. 
00:33:50.759 [2024-09-30 23:02:17.529681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.529710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.530050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.530081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.530306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.530335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.530713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.530742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.531016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.531045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.531390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.531425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.531774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.531803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.532143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.532173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.532424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.532454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.532809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.532839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 
00:33:50.759 [2024-09-30 23:02:17.533207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.533237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.533605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.533634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.533993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.534022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.534369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.534399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.534756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.534784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.535129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.535158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.535393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.535423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.535769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.535799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.536094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.536124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.536478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.536506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 
00:33:50.759 [2024-09-30 23:02:17.536866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.536917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.537272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.537300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.537553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.537580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.537945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.537975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.538348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.538376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.538745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.538773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.539143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.539173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.539536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.539564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.539924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.539953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.540352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.540382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 
00:33:50.759 [2024-09-30 23:02:17.540723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.540752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.540992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.541024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.541407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.541437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.541790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.541818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.542194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.542225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.542589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.542619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.542962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.542991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.543425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.759 [2024-09-30 23:02:17.543453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.759 qpair failed and we were unable to recover it. 00:33:50.759 [2024-09-30 23:02:17.543808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.543837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.544191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.544219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 
00:33:50.760 [2024-09-30 23:02:17.544576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.544605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.544973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.545003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.545355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.545383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.545761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.545789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.546156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.546185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.546543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.546583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.546840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.546868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.547238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.547267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.547641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.547670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.548030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.548058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 
00:33:50.760 [2024-09-30 23:02:17.548425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.548453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.548820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.548849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.549112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.549143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.549485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.549514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.549885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.549925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.550287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.550315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.550695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.550725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.551085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.551114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.551485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.551513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.551886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.551925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 
00:33:50.760 [2024-09-30 23:02:17.552329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.552357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.552715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.552743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.553118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.553147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.553484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.553512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.553871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.553911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.554254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.554281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.554619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.554646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.555014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.555044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.555413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.555440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.555808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.555835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 
00:33:50.760 [2024-09-30 23:02:17.556065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.556096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.556346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.556374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.556749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.556779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.556991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.557021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.557381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.557409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.557777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.557806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.760 qpair failed and we were unable to recover it. 00:33:50.760 [2024-09-30 23:02:17.558154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.760 [2024-09-30 23:02:17.558184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.761 qpair failed and we were unable to recover it. 00:33:50.761 [2024-09-30 23:02:17.558543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.761 [2024-09-30 23:02:17.558572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.761 qpair failed and we were unable to recover it. 00:33:50.761 [2024-09-30 23:02:17.558952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.761 [2024-09-30 23:02:17.558981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.761 qpair failed and we were unable to recover it. 00:33:50.761 [2024-09-30 23:02:17.559449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.761 [2024-09-30 23:02:17.559478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.761 qpair failed and we were unable to recover it. 
00:33:50.761 [2024-09-30 23:02:17.559710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.761 [2024-09-30 23:02:17.559739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.761 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for roughly 200 further connect() attempts (log timestamps 2024-09-30 23:02:17.560121 through 23:02:17.637618, console prefixes 00:33:50.761-00:33:50.767); every attempt fails with errno = 111 against tqpair=0x7f36d8000b90, addr=10.0.0.2, port=4420, and each time the qpair could not be recovered ...]
00:33:50.767 [2024-09-30 23:02:17.637979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.638009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.638338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.638367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.638735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.638764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.639126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.639156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.639535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.639565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.639805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.639833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.640166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.640194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.640562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.640591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.640939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.640969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.641359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.641388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 
00:33:50.767 [2024-09-30 23:02:17.641759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.641788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.642159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.642188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.642476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.642504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.642876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.642912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.643273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.643300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.643681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.643709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.643976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.644005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.644383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.644411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.644640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.644669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.644998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.645028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 
00:33:50.767 [2024-09-30 23:02:17.645251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.645279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.645639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.645668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.646045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.646081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.646421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.646449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.646822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.646850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.647090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.647120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.647493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.647522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.647877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.647917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.648247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.648275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 00:33:50.767 [2024-09-30 23:02:17.648539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.767 [2024-09-30 23:02:17.648567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.767 qpair failed and we were unable to recover it. 
00:33:50.768 [2024-09-30 23:02:17.648919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.648949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.649214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.649245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.649517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.649545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.649770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.649798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.650149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.650179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.650541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.650570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.650940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.650970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.651336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.651364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.651731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.651760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.652129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.652158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 
00:33:50.768 [2024-09-30 23:02:17.652515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.652544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.652912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.652941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.653347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.653374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.653618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.653646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.654007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.654037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.654381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.654410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.654755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.654784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.655127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.655157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.655482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.655510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.655881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.655918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 
00:33:50.768 [2024-09-30 23:02:17.656277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.656305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.656681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.656710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.657132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.657162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.657515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.657546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.657917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.657949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.658314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.658343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.658707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.658736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.659105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.659136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.659473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.659502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.659862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.659892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 
00:33:50.768 [2024-09-30 23:02:17.660274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.660303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.660538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.660569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.660949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.660985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.661351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.661381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.661742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.661771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.662193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.662223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.662427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.662454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.662817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.768 [2024-09-30 23:02:17.662848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.768 qpair failed and we were unable to recover it. 00:33:50.768 [2024-09-30 23:02:17.663219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.663252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.663587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.663616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 
00:33:50.769 [2024-09-30 23:02:17.663983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.664014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.664358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.664387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.664758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.664787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.665143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.665172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.665535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.665565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.665935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.665966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.666358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.666387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.666751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.666782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.667147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.667177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.667535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.667565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 
00:33:50.769 [2024-09-30 23:02:17.667924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.667953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.668309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.668338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.668775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.668805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.669040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.669072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.669445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.669475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.669846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.669875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.670233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.670264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.670611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.670641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.671040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.671071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.671430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.671459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 
00:33:50.769 [2024-09-30 23:02:17.671824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.671852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.672221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.672251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.672482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.672511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.672885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.672926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.673269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.673299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.673663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.673692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.674032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.674062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.674447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.674476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.674721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.674753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.675085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.675116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 
00:33:50.769 [2024-09-30 23:02:17.675494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.675524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.675832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.769 [2024-09-30 23:02:17.675861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.769 qpair failed and we were unable to recover it. 00:33:50.769 [2024-09-30 23:02:17.676232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.676269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.676612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.676640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.677003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.677033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.677366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.677397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.677746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.677774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.678121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.678152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.678494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.678523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.678845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.678874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 
00:33:50.770 [2024-09-30 23:02:17.679089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.679118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.679479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.679507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.679751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.679783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.680050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.680079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.680429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.680460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.680833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.680862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.681241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.681271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.681645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.681674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.682044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.682074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.682423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.682453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 
00:33:50.770 [2024-09-30 23:02:17.682701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.682731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.683156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.683187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.683548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.683576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.683836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.683864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.684260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.684291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.684654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.684683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.685048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.685079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.685438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.685466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.685831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.685860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 00:33:50.770 [2024-09-30 23:02:17.686222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.686253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it. 
00:33:50.770 [2024-09-30 23:02:17.686612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.770 [2024-09-30 23:02:17.686640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:50.770 qpair failed and we were unable to recover it.
[... the same error sequence (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 23:02:17.687 through 23:02:17.765 ...]
00:33:51.050 [2024-09-30 23:02:17.766255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.766283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.766646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.766675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.767049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.767085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.767445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.767472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.767837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.767865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.768285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.768315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.768679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.768708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.769074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.769102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.769495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.769523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.769775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.769805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 
00:33:51.050 [2024-09-30 23:02:17.770091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.770124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.770471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.770500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.770872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.770911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.771165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.771193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.771549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.771576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.771952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.771982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.772353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.772382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.772727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.772755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.773200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.773230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.773590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.773618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 
00:33:51.050 [2024-09-30 23:02:17.773975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.774004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.774267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.774297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.774674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.774703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.775049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.775079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.775423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.775453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.775686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.775715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.776077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.776106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.050 [2024-09-30 23:02:17.776475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.050 [2024-09-30 23:02:17.776504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.050 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.776871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.776910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.777245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.777275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 
00:33:51.051 [2024-09-30 23:02:17.777640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.777668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.778031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.778060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.778334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.778362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.778713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.778742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.779097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.779127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.779380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.779409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.779770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.779799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.780139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.780169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.780538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.780566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.780946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.780975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 
00:33:51.051 [2024-09-30 23:02:17.781343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.781371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.781752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.781780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.782112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.782142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.782510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.782539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.782798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.782825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.783185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.783214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.783490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.783518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.783980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.784009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.784352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.784382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.784720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.784749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 
00:33:51.051 [2024-09-30 23:02:17.785105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.785134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.785496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.785524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.785881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.785931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.786263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.786292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.786541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.786569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.786812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.786840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.787083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.787115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.787444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.787473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.787814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.787844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.788185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.788214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 
00:33:51.051 [2024-09-30 23:02:17.788577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.788605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.788971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.789001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.789355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.789383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.789753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.789782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.790139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.790168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.051 [2024-09-30 23:02:17.790529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.051 [2024-09-30 23:02:17.790558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.051 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.790942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.790971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.791218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.791246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.791668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.791696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.792100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.792136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 
00:33:51.052 [2024-09-30 23:02:17.792367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.792398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.792775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.792803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.793143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.793173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.793550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.793579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.793831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.793859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.794131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.794162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.794529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.794558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.794968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.794997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.795352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.795380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.795756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.795785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 
00:33:51.052 [2024-09-30 23:02:17.796133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.796163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.796451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.796479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.796673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.796701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.796990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.797020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.797379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.797409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.797851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.797880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.798151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.798180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.798541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.798569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.798941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.798971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.799352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.799380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 
00:33:51.052 [2024-09-30 23:02:17.799748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.799776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.800005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.800033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.800260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.800293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.800655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.800683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.801051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.801081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.801442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.801471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.801839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.801868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.802272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.802303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.802543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.802572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.802804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.802832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 
00:33:51.052 [2024-09-30 23:02:17.803077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.803107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.803476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.803504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.803842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.803871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.804294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.052 [2024-09-30 23:02:17.804323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.052 qpair failed and we were unable to recover it. 00:33:51.052 [2024-09-30 23:02:17.804685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.804714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.805074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.805103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.805470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.805498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.805867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.805922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.806251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.806279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.806645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.806680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 
00:33:51.053 [2024-09-30 23:02:17.807032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.807061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.807294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.807325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.807578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.807606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.807990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.808019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.808386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.808415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.808774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.808802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.809060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.809089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.809435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.809463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.809827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.809855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.810297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.810327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 
00:33:51.053 [2024-09-30 23:02:17.810570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.810600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.810952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.810981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.811318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.811347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.811710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.811739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.812179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.812208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.812538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.812567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.812936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.812965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.813334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.813362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.813761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.813789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 00:33:51.053 [2024-09-30 23:02:17.814009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.814039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it. 
00:33:51.053 [2024-09-30 23:02:17.814396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.053 [2024-09-30 23:02:17.814425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.053 qpair failed and we were unable to recover it.
00:33:51.053 [the same connect() failed, errno = 111 (ECONNREFUSED) / sock connection error pair for tqpair=0x7f36d8000b90 (addr=10.0.0.2, port=4420) repeats continuously from 23:02:17.814396 through 23:02:17.893402, every attempt ending with "qpair failed and we were unable to recover it."]
00:33:51.059 [2024-09-30 23:02:17.893402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.893430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it.
00:33:51.059 [2024-09-30 23:02:17.893787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.893815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.894170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.894199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.894537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.894566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.894935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.894964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.895339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.895368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.895741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.895769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.896141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.896170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.896389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.896418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.896772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.896802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.897165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.897202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 
00:33:51.059 [2024-09-30 23:02:17.897407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.897436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.897904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.897934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.898371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.898399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.898728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.898757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.899111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.899140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.899505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.899535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.899763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.899794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.900177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.900207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.900466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.900494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.900725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.900754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 
00:33:51.059 [2024-09-30 23:02:17.901132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.901162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.901527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.901556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.901918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.059 [2024-09-30 23:02:17.901947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.059 qpair failed and we were unable to recover it. 00:33:51.059 [2024-09-30 23:02:17.902307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.902336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.902574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.902603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.902981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.903010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.903406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.903435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.903796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.903824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.904200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.904230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.904592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.904621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 
00:33:51.060 [2024-09-30 23:02:17.904872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.904926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.905344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.905373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.905736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.905764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.906187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.906216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.906548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.906577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.907020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.907050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.907416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.907445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.907791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.907819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.908091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.908121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.908387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.908416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 
00:33:51.060 [2024-09-30 23:02:17.908761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.908790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.909180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.909210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.909566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.909594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.909965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.909997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.910336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.910364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.910749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.910778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.911153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.911184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.911550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.911579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.911951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.911982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.912351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.912388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 
00:33:51.060 [2024-09-30 23:02:17.912731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.912766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.913135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.913165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.913541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.913570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.913938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.913967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.914341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.914372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.914734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.914764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.915023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.915053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.915305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.915335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.915700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.915729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.916102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.916132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 
00:33:51.060 [2024-09-30 23:02:17.916475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.916505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.060 [2024-09-30 23:02:17.916867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.060 [2024-09-30 23:02:17.916909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.060 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.917266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.917294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.917661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.917692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.917942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.917975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.918244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.918277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.918627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.918657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.919032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.919062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.919393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.919422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.919794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.919823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 
00:33:51.061 [2024-09-30 23:02:17.920193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.920222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.920604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.920635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.920883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.920936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.921159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.921193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.921595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.921625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.922010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.922040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.922414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.922444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.922806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.922835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.923184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.923214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.923576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.923606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 
00:33:51.061 [2024-09-30 23:02:17.923973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.924004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.924356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.924386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.924828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.924857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.925275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.925304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.925657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.925686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.925952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.925982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.926311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.926341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.926704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.926733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.926979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.927009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.927385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.927420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 
00:33:51.061 [2024-09-30 23:02:17.927765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.927796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.928153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.928185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.928545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.928574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.928833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.928864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.929259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.929290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.929673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.929702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.930064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.930096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.930459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.930487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.930853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.930881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 00:33:51.061 [2024-09-30 23:02:17.931267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.931296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.061 qpair failed and we were unable to recover it. 
00:33:51.061 [2024-09-30 23:02:17.931544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.061 [2024-09-30 23:02:17.931572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.931932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.931963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.932306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.932337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.932702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.932731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.933103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.933135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.933535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.933564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.933923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.933955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.934284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.934312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.934561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.934591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.934943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.934974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 
00:33:51.062 [2024-09-30 23:02:17.935235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.935264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.935630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.935660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.936062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.936092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.936453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.936483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.936842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.936871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.937152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.937181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.937572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.937602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.937976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.938007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.938436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.938465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.938815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.938847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 
00:33:51.062 [2024-09-30 23:02:17.939210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.939240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.939604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.939633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.939977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.940006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.940374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.940403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.940771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.940799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.941169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.941199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.941548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.941578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.941990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.942019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.942271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.942300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 00:33:51.062 [2024-09-30 23:02:17.942663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.062 [2024-09-30 23:02:17.942700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.062 qpair failed and we were unable to recover it. 
00:33:51.062 [2024-09-30 23:02:17.943044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.062 [2024-09-30 23:02:17.943073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.062 qpair failed and we were unable to recover it.
00:33:51.068 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for roughly 200 further attempts between 23:02:17.943 and 23:02:18.022; duplicate entries elided ...]
00:33:51.068 [2024-09-30 23:02:18.022792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.022820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.023062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.023093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.023486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.023515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.023759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.023791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.024161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.024191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.024555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.024585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.024961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.024991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.025356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.025384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.025753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.025782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.026128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.026158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 
00:33:51.068 [2024-09-30 23:02:18.026518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.026546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.026779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.026809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.027192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.027221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.027610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.027639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.028001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.028031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.028240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.028271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.028460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.028496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.028846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.028876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.029237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.068 [2024-09-30 23:02:18.029268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.068 qpair failed and we were unable to recover it. 00:33:51.068 [2024-09-30 23:02:18.029618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.029648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 
00:33:51.069 [2024-09-30 23:02:18.029915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.029945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.030169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.030201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.030559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.030588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.030995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.031025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.031385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.031413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.031665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.031693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.032140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.032170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.032499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.032527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.032735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.032764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.033130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.033160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 
00:33:51.069 [2024-09-30 23:02:18.033385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.033414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.033794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.033823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.034194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.034224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.034560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.034588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.034950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.034980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.035337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.035365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.035622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.035649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.035934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.035966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.036389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.036418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.036742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.036771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 
00:33:51.069 [2024-09-30 23:02:18.037145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.037175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.037537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.037567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.037933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.037962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.038222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.038251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.038627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.038655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.039021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.039050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.039416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.039444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.039788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.039816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.040186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.040216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.040585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.040613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 
00:33:51.069 [2024-09-30 23:02:18.040950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.040980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.041321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.041350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.041721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.041750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.041976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.042006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.042368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.042397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.042757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.042785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.043158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.043193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.069 [2024-09-30 23:02:18.043543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.069 [2024-09-30 23:02:18.043573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.069 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.043959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.043989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.044325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.044355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 
00:33:51.070 [2024-09-30 23:02:18.044725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.044754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.045121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.045152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.045499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.045527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.045904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.045935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.046297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.046326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.046684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.046712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.046962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.046993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.047372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.047401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.047761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.047798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.048038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.048068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 
00:33:51.070 [2024-09-30 23:02:18.048429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.048459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.048822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.048850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.049205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.049235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.049607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.049635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.049989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.050019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.050401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.050430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.050693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.050721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.051121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.051150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.051383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.051413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.051802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.051830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 
00:33:51.070 [2024-09-30 23:02:18.052163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.052193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.070 [2024-09-30 23:02:18.052447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.070 [2024-09-30 23:02:18.052475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.070 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.052857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.052888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.053147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.053180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.053578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.053608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.053970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.054000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.054358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.054388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.054756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.054785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.055137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.055167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.055533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.055563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 
00:33:51.348 [2024-09-30 23:02:18.055935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.055966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.056328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.056356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.056694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.056722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.056983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.057012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.057418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.057447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.057817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.057845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.058208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.058244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.058579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.058607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.058965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.058996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.059370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.059398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 
00:33:51.348 [2024-09-30 23:02:18.059770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.059798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.348 [2024-09-30 23:02:18.060162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.348 [2024-09-30 23:02:18.060193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.348 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.060545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.060573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.060978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.061007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.061362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.061391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.061801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.061829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.062194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.062223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.062579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.062608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.062857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.062886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.063258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.063287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 
00:33:51.349 [2024-09-30 23:02:18.063652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.063682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.064043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.064073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.064337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.064368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.064733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.064762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.065116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.065147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.065480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.065508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.065749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.065777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.066135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.066165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.066533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.066562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.066919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.066949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 
00:33:51.349 [2024-09-30 23:02:18.067326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.067355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.067712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.067741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.068074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.068103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.068369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.068399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.068749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.068779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.069153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.069183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.069561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.069590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.069996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.070025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.070245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.070275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 00:33:51.349 [2024-09-30 23:02:18.070661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.349 [2024-09-30 23:02:18.070690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.349 qpair failed and we were unable to recover it. 
00:33:51.349 [2024-09-30 23:02:18.070950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.349 [2024-09-30 23:02:18.070979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.349 qpair failed and we were unable to recover it.
00:33:51.349 [the same three-line sequence -- posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7f36d8000b90 with addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it." -- repeats continuously from 23:02:18.070950 through 23:02:18.141476; repeats elided]
00:33:51.355 [2024-09-30 23:02:18.141448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.355 [2024-09-30 23:02:18.141476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.355 qpair failed and we were unable to recover it.
00:33:51.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 899285 Killed "${NVMF_APP[@]}" "$@"
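The "Killed" message is bash reporting a job that died from SIGKILL: the target_disconnect test kills the running nvmf_tgt (PID 899285 here) to provoke exactly the connection failures above, and the shell prints the script line, PID, signal name, and command. A small C sketch of how such a termination status is produced and observed (sleep stands in for the real target process):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            execlp("sleep", "sleep", "60", (char *)NULL);  /* stand-in for the target app */
            _exit(127);
        }
        kill(pid, SIGKILL);            /* what the disconnect test does to its target */
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            /* Bash renders this same condition as "<line>: <pid> Killed <command>". */
            printf("%d killed by signal %d (SIGKILL=%d)\n",
                   (int)pid, WTERMSIG(status), SIGKILL);
        return 0;
    }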
00:33:51.355 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:51.355 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:51.355 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:33:51.355 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:51.355 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=900163
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 900163
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 900163 ']'
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:51.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:51.356 23:02:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
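After the kill, the test brings the target back: nvmfappstart launches a fresh nvmf_tgt (PID 900163) inside the cvl_0_0_ns_spdk network namespace, and waitforlisten polls until the new process accepts on the RPC socket /var/tmp/spdk.sock, giving up after max_retries=100. The real helper is a bash function in autotest_common.sh; the C sketch below only approximates its polling idea (the 100 ms delay between attempts is an assumption).

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Poll until something listens on the UNIX socket path, up to max_retries. */
    static int wait_for_listen(const char *path, int max_retries) {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) return -1;
            struct sockaddr_un addr = {0};
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;            /* target is up and accepting RPCs */
            }
            close(fd);
            usleep(100 * 1000);      /* assumed 100 ms between attempts */
        }
        errno = ETIMEDOUT;
        return -1;
    }

    int main(void) {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)   /* path and limit from the log */
            puts("listener is ready");
        else
            perror("wait_for_listen");
        return 0;
    }

Until that listener (and the TCP listener on port 4420 behind it) is back, the host's reconnect loop keeps failing, as the remaining records show.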
00:33:51.360 [2024-09-30 23:02:18.216660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.360 [2024-09-30 23:02:18.216688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.360 qpair failed and we were unable to recover it.
00:33:51.360 [2024-09-30 23:02:18.217022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.217047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.217421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.217443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.217837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.217860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.218248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.218279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.218647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.218675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.218905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.218929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.219283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.219303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.219710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.219736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.220038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.220060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 00:33:51.360 [2024-09-30 23:02:18.220406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.360 [2024-09-30 23:02:18.220428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.360 qpair failed and we were unable to recover it. 
00:33:51.360 [... two further identical failures against tqpair=0x7f36d8000b90 at 23:02:18.220679 and 23:02:18.220839 ...]
00:33:51.360 [2024-09-30 23:02:18.221419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.360 [2024-09-30 23:02:18.221522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:51.360 qpair failed and we were unable to recover it.
00:33:51.360 [... the same pattern repeats against tqpair=0x7f36dc000b90 from 23:02:18.222150 through 23:02:18.225274 ...]
00:33:51.360 [... failures resume against tqpair=0x7f36d8000b90 from 23:02:18.225668 through 23:02:18.226751 ...]
00:33:51.360 [2024-09-30 23:02:18.226777] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:33:51.360 [2024-09-30 23:02:18.226832] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:51.360 [... the connect()/tqpair=0x7f36d8000b90 failure pattern continues from 23:02:18.227079 through 23:02:18.228574 ...]
00:33:51.360 [... the same three-line failure pattern (connect() failed, errno = 111 / sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 23:02:18.228838 through 23:02:18.254623 ...]
00:33:51.363 [... after a short gap it resumes at 23:02:18.258913 and repeats through 23:02:18.279030; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:51.364 [2024-09-30 23:02:18.279387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.279399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.279714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.279726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.280046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.280057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.280406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.280417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.280740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.280750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.280989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.281002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.281368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.281381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.281676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.281690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.282034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.364 [2024-09-30 23:02:18.282047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.364 qpair failed and we were unable to recover it. 00:33:51.364 [2024-09-30 23:02:18.282472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.282485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 
00:33:51.365 [2024-09-30 23:02:18.282692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.282704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.283007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.283022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.283430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.283444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.283642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.283656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.284001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.284015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.284342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.284355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.284671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.284684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.285283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.285300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.285633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.285655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.285965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.285978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 
00:33:51.365 [2024-09-30 23:02:18.286218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.286232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.286581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.286595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.286928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.286942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.287293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.287306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.287661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.287675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.287917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.287931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.288167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.288181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.288396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.288409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.288760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.288773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.289076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.289089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 
00:33:51.365 [2024-09-30 23:02:18.289425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.289438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.289634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.289653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.289901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.289915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.290294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.290306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.290635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.290647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.290968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.290981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.291312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.291325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.291655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.291669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.292012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.292025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.292355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.292371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 
00:33:51.365 [2024-09-30 23:02:18.292715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.292733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.293003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.293021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.293408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.293425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.293774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.293792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.294134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.294152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.294540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.294557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.294872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.294889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.295216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.365 [2024-09-30 23:02:18.295234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.365 qpair failed and we were unable to recover it. 00:33:51.365 [2024-09-30 23:02:18.295565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.295582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.295921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.295940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 
00:33:51.366 [2024-09-30 23:02:18.296190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.296208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.296567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.296584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.296917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.296935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.297271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.297289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.297506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.297523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.297901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.297920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.298253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.298272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.298602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.298619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.298946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.298965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.299313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.299331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 
00:33:51.366 [2024-09-30 23:02:18.299683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.299701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.300047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.300065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.300396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.300413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.300752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.300770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.301088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.301106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.301189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.301208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.301459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.301477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.301794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.301813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.302142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.302160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.302492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.302511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 
00:33:51.366 [2024-09-30 23:02:18.302841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.302858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.303196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.303219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.303568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.303586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.303810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.303829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.304088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.304108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.304452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.304472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.304819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.304836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.305222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.305241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.305552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.305571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.305913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.305931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 
00:33:51.366 [2024-09-30 23:02:18.306264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.306287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.366 qpair failed and we were unable to recover it. 00:33:51.366 [2024-09-30 23:02:18.306651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.366 [2024-09-30 23:02:18.306674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.307029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.307052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.307400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.307423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.307622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.307646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.308013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.308036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.308380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.308401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.308629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.308652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.308992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.309014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.309238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.309259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 
00:33:51.367 [2024-09-30 23:02:18.309443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.309465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.309723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.309744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.310012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.310034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.310362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.310384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.310748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.310770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.311160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.311183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.311548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.311569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.311906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.311936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.312323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.312345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.312714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.312736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 
00:33:51.367 [2024-09-30 23:02:18.313118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.313143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.313509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.313530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.313860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.313882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.314114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.314138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.314556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.314578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.314924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.314947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.315118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.315139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.315565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.315588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.315972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.315996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.316229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.316250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 
00:33:51.367 [2024-09-30 23:02:18.316620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.316644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.317017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.317040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.317425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.317448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.317795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.317818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.318150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.318174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.318513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.318535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.318867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.318929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.319287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.319318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.319622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.319653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 00:33:51.367 [2024-09-30 23:02:18.320017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.367 [2024-09-30 23:02:18.320047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.367 qpair failed and we were unable to recover it. 
00:33:51.368 [2024-09-30 23:02:18.320429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.320458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.320681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.320711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.321071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.321102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.321463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.321493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.321857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.321887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.322318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.322348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.322700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.322730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.323093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.323123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.323482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.323512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.323873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.323912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 
00:33:51.368 [2024-09-30 23:02:18.324283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.324311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.324692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.324721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.325077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.325107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.325368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.325399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 [2024-09-30 23:02:18.325398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.325761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.325790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.326182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.326212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.326574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.326602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.326972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.327001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 00:33:51.368 [2024-09-30 23:02:18.327403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.368 [2024-09-30 23:02:18.327432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.368 qpair failed and we were unable to recover it. 
00:33:51.368 [2024-09-30 23:02:18.327705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.368 [2024-09-30 23:02:18.327733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.368 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 23:02:18.328 through 23:02:18.399, elapsed 00:33:51.368 to 00:33:51.648 ...]
00:33:51.648 [2024-09-30 23:02:18.400081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.648 [2024-09-30 23:02:18.400112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.648 qpair failed and we were unable to recover it. 00:33:51.648 [2024-09-30 23:02:18.400467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.648 [2024-09-30 23:02:18.400496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.648 qpair failed and we were unable to recover it. 00:33:51.648 [2024-09-30 23:02:18.400755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.648 [2024-09-30 23:02:18.400782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.648 qpair failed and we were unable to recover it. 00:33:51.648 [2024-09-30 23:02:18.401154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.648 [2024-09-30 23:02:18.401185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.648 qpair failed and we were unable to recover it. 00:33:51.648 [2024-09-30 23:02:18.401429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.648 [2024-09-30 23:02:18.401458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.648 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.401828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.401857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.402233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.402264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.402631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.402659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.403015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.403045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.403486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.403515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 
00:33:51.649 [2024-09-30 23:02:18.403909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.403939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.404265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.404294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.404648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.404678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.405085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.405116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.405482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.405511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.405841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.405870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.406241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.406270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.406520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.406548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.406921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.406952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.407205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.407233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 
00:33:51.649 [2024-09-30 23:02:18.407650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.407680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.408040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.408074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.408410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.408445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.408802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.408832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.409087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.409122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.409521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.409550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.409926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.409958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.410318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.410347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.410705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.410735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.411078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.411109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 
00:33:51.649 [2024-09-30 23:02:18.411459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.411490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.411832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.411862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.649 [2024-09-30 23:02:18.412266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.649 [2024-09-30 23:02:18.412296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.649 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.412644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.412673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.413023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.413055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.413420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.413450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.413814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.413843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.414208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.414239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.414611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.414639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.414997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.415027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 
00:33:51.650 [2024-09-30 23:02:18.415367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.415398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.415757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.415787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.416040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.416071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.416454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.416482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.416817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.416847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.417297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.417328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.417774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.417804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.418149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.418179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.418523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.418553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 00:33:51.650 [2024-09-30 23:02:18.418922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.650 [2024-09-30 23:02:18.418954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.650 qpair failed and we were unable to recover it. 
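# For triage: errno = 111 on Linux is ECONNREFUSED -- each triplet above is the
# initiator's connect() to 10.0.0.2:4420 being answered with a TCP RST because
# nothing was accepting on the NVMe/TCP listener port at that moment. A minimal
# sketch of the same failure mode outside SPDK (hypothetical session; assumes no
# listener is up on that address):
$ nc -zv -w 2 10.0.0.2 4420
nc: connect to 10.0.0.2 port 4420 (tcp) failed: Connection refused
$ python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
ECONNREFUSED Connection refused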
00:33:51.650 [2024-09-30 23:02:18.419353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.419383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.419745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.419775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.420148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.420179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.420535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.420564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.420933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.420966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.421256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.421285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.421651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.421680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.422039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.422049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:51.650 [2024-09-30 23:02:18.422069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.422101] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:51.650 [2024-09-30 23:02:18.422114] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:51.650 [2024-09-30 23:02:18.422121] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:51.650 [2024-09-30 23:02:18.422127] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
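# The four app_setup_trace notices above are the actionable part of this stretch
# of log: the target runs with tracepoint group mask 0xFFFF, so its trace can be
# captured live or salvaged from shared memory after the run. A sketch using only
# the commands the notices themselves name (output paths are our own placeholders):
$ spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt      # live snapshot while the target is up
$ cp /dev/shm/nvmf_trace.0 /tmp/                     # or keep the shm file for offline analysis/debug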
00:33:51.650 [2024-09-30 23:02:18.422313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:33:51.650 [2024-09-30 23:02:18.422451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.422482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.422454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:33:51.650 [2024-09-30 23:02:18.422629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:33:51.650 [2024-09-30 23:02:18.422629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:33:51.650 [2024-09-30 23:02:18.422869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.422943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.423198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.423230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.423605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.423635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.424005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.424036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.424285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.424313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.424639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.424669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.424920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.424952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.650 qpair failed and we were unable to recover it.
00:33:51.650 [2024-09-30 23:02:18.425187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.650 [2024-09-30 23:02:18.425220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.651 qpair failed and we were unable to recover it.
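# The reactor_run notices show SPDK's event framework pinning one reactor thread
# per core in the app's CPU mask; cores 4-7 coming up together is consistent with
# a mask like 0xf0. To reproduce that reactor layout when starting a target by
# hand (binary path and mask are illustrative, not taken from this log):
$ build/bin/nvmf_tgt -m 0xf0        # -m/--cpumask selects the reactor cores (bits 4-7 => cores 4,5,6,7)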
00:33:51.651 [2024-09-30 23:02:18.425594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.651 [2024-09-30 23:02:18.425623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.651 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f36d8000b90 repeats 108 more times, 23:02:18.425 through 23:02:18.464 ...]
00:33:51.653 [2024-09-30 23:02:18.464418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.653 [2024-09-30 23:02:18.464449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:51.653 qpair failed and we were unable to recover it.
00:33:51.654 [2024-09-30 23:02:18.464678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.464709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.465052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.465082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.465432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.465469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.465837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.465867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.466219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.466250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.466649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.466678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.467048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.467078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.467332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.467360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.467594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.467622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.467746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.467777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 
00:33:51.654 [2024-09-30 23:02:18.468141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.468172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.468553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.468582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.468801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.468830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.469077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.469108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.469334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.469364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.469731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.469761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.470128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.470159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.470525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.470554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.470833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.470861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.471224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.471254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 
00:33:51.654 [2024-09-30 23:02:18.471538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.471567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.471930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.471961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.472363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.472392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.472644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.472672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.473033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.473063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.473410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.473441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.473683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.473712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.474073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.474103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.474461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.474491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.474871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.474911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 
00:33:51.654 [2024-09-30 23:02:18.475117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.475145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.475527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.475555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.475919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.654 [2024-09-30 23:02:18.475949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.654 qpair failed and we were unable to recover it. 00:33:51.654 [2024-09-30 23:02:18.476316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.476344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.476728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.476758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.477129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.477161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.477288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.477317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.477725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.477755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.477998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.478029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.478418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.478446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 
00:33:51.655 [2024-09-30 23:02:18.478689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.478720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.479103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.479135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.479445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.479483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.479757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.479786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.480146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.480177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.480523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.480554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.480914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.480945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.481320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.481350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.481708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.481737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.482114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.482143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 
00:33:51.655 [2024-09-30 23:02:18.482369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.482398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.482802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.482833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.483050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.483080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.483313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.483341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.483712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.483741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.483952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.483982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.484209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.484239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.484610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.484637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.485089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.485118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.485467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.485497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 
00:33:51.655 [2024-09-30 23:02:18.485742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.485771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.486133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.486163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.486528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.486557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.486926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.486956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.487313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.487342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.487580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.487610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.487964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.487995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.488233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.488262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.488649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.488678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.489067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.489098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 
00:33:51.655 [2024-09-30 23:02:18.489541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.655 [2024-09-30 23:02:18.489570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.655 qpair failed and we were unable to recover it. 00:33:51.655 [2024-09-30 23:02:18.489791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.489819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.490230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.490261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.490611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.490641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.490876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.490926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.491163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.491191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.491412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.491440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.491803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.491831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.492115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.492146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.492516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.492544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 
00:33:51.656 [2024-09-30 23:02:18.492918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.492948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.493173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.493201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.493430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.493466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.493827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.493856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.494107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.494140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.494377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.494405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.494799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.494828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.495197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.495227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.495606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.495636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.496025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.496055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 
00:33:51.656 [2024-09-30 23:02:18.496441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.496470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.496843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.496872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.497260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.497290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.497704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.497732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.498086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.498124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.498359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.498387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.498768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.498797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.499030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.499060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.499445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.499473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.499566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.499592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 
00:33:51.656 [2024-09-30 23:02:18.499951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.500073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.500512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.500550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.500800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.500830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.501326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.501435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.501820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.501854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.502228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.502259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.502644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.502672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.503050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.503081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.503282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.503313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 00:33:51.656 [2024-09-30 23:02:18.503623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.656 [2024-09-30 23:02:18.503655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.656 qpair failed and we were unable to recover it. 
00:33:51.657 [2024-09-30 23:02:18.503989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.504020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.504250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.504278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.504710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.504738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.505123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.505152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.505516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.505546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.505919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.505949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.506215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.506243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.506356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.506383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.506627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.506661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.506876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.506929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 
00:33:51.657 [2024-09-30 23:02:18.507334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.507363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.507739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.507767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.508007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.508043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.508455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.508484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.508684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.508712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.508838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.508867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.509247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.509277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.509647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.509675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.510049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.510080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.510296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.510324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 
00:33:51.657 [2024-09-30 23:02:18.510697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.510725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.511086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.511116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.511334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.511363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.511584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.511612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.511891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.511938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.512324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.512354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.512727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.512757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.513124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.513155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.513520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.513548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.513921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.513950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 
00:33:51.657 [2024-09-30 23:02:18.514195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.514223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.514557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.514587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.514952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.514982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.515368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.515396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.515743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.515771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.516127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.516157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.516494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.516524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.516733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.657 [2024-09-30 23:02:18.516761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.657 qpair failed and we were unable to recover it. 00:33:51.657 [2024-09-30 23:02:18.517172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.517204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.517568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.517597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 
00:33:51.658 [2024-09-30 23:02:18.517817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.517846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.518270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.518300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.518663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.518692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.519133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.519163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.519507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.519536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.519745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.519774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.520198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.520227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.520530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.520560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.520952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.520983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.521354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.521383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 
00:33:51.658 [2024-09-30 23:02:18.521742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.521772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.521990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.522020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.522290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.522324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.522669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.522698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.522925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.522954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.523193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.523222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.523582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.523611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.523964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.523994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.524164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.524194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.524432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.524460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 
00:33:51.658 [2024-09-30 23:02:18.524836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.524864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.525125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.525155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.525528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.525557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.525794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.525823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.526069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.526099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.526322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.526350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.526605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.526634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.526883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.526923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.527297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.527325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.527693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.527720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 
00:33:51.658 [2024-09-30 23:02:18.528085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.528114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.528476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.528505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.528619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.528648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.528986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.529015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.529391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.529421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.529791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.529821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.658 [2024-09-30 23:02:18.530210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.658 [2024-09-30 23:02:18.530240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.658 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.530611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.530641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.530938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.530967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.531218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.531247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 
00:33:51.659 [2024-09-30 23:02:18.531619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.531649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.532014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.532043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.532262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.532290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.532660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.532690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.533061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.533092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.533473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.533501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.533868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.533918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.534210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.534239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.534594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.534624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.534996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.535027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 
00:33:51.659 [2024-09-30 23:02:18.535401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.535429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.535800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.535829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.536200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.536234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.536452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.536481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.536864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.536904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.537169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.537199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.537587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.537617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.537985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.538035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.538406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.538435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.538668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.538696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 
00:33:51.659 [2024-09-30 23:02:18.539025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.539055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.539417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.539447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.539544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.539571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.539861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.539972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.540256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.540288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.540510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.540541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.540922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.540955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.541314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.541344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.541586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.541616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 00:33:51.659 [2024-09-30 23:02:18.541973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.659 [2024-09-30 23:02:18.542004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.659 qpair failed and we were unable to recover it. 
00:33:51.659 [2024-09-30 23:02:18.542230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.542259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.542675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.542704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.542914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.542944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.543306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.543335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.543683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.543712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.544094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.544125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.544369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.544397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.544643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.544671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.545071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.545101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.545474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.545503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 
00:33:51.660 [2024-09-30 23:02:18.545856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.545884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.546291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.546321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.546701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.546731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.546956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.546986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.547368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.547397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.547630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.547659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.548038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.548068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.548324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.548356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.548716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.548747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.549136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.549167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 
00:33:51.660 [2024-09-30 23:02:18.549371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.549399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.549772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.549800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.550034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.550071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.550439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.550468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.550863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.550892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.551163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.551192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.551572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.551601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.551840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.551871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.552296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.552329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.552684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.552714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 
00:33:51.660 [2024-09-30 23:02:18.552957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.552988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.553365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.553395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.553757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.553785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.554166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.554196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.554421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.554450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.554662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.554690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.555051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.555081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.555313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.555344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.660 [2024-09-30 23:02:18.555705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.660 [2024-09-30 23:02:18.555734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.660 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.555989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.556021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 
00:33:51.661 [2024-09-30 23:02:18.556259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.556288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.556636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.556666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.557044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.557073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.557446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.557475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.557846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.557876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.557983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.558011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.558343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.558373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.558755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.558784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.559164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.559193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.559560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.559591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 
00:33:51.661 [2024-09-30 23:02:18.559965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.559996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.560364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.560393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.560606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.560635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.560911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.560941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.561298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.561326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.561628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.561658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.562084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.562115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.562492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.562522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.562889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.562931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.563199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.563231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 
00:33:51.661 [2024-09-30 23:02:18.563591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.563619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.564083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.564113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.564336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.564373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.564757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.564786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.565141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.565173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.565411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.565440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.565825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.565853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.566082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.566112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.566479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.566508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.566950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.566981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 
00:33:51.661 [2024-09-30 23:02:18.567285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.567313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.567673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.567702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.568071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.568102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.568470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.568500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.568720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.568747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.569006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.569035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.569422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.661 [2024-09-30 23:02:18.569452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.661 qpair failed and we were unable to recover it. 00:33:51.661 [2024-09-30 23:02:18.569811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.662 [2024-09-30 23:02:18.569840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.662 qpair failed and we were unable to recover it. 00:33:51.662 [2024-09-30 23:02:18.570110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.662 [2024-09-30 23:02:18.570140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.662 qpair failed and we were unable to recover it. 00:33:51.662 [2024-09-30 23:02:18.570499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.662 [2024-09-30 23:02:18.570528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.662 qpair failed and we were unable to recover it. 
00:33:51.662 [2024-09-30 23:02:18.570772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.662 [2024-09-30 23:02:18.570804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420
00:33:51.662 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error repeat for tqpair=0x7f36e4000b90 through 2024-09-30 23:02:18.616286 ...]
00:33:51.665 [2024-09-30 23:02:18.616935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.665 [2024-09-30 23:02:18.617051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:51.665 qpair failed and we were unable to recover it.
[... the same failure pattern repeats for tqpair=0x7f36dc000b90 through 2024-09-30 23:02:18.646407 ...]
00:33:51.667 [2024-09-30 23:02:18.646691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.667 [2024-09-30 23:02:18.646720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.667 qpair failed and we were unable to recover it. 00:33:51.667 [2024-09-30 23:02:18.647077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.667 [2024-09-30 23:02:18.647107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.667 qpair failed and we were unable to recover it. 00:33:51.667 [2024-09-30 23:02:18.647283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.667 [2024-09-30 23:02:18.647311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.667 qpair failed and we were unable to recover it. 00:33:51.667 [2024-09-30 23:02:18.647716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.667 [2024-09-30 23:02:18.647745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.667 qpair failed and we were unable to recover it. 00:33:51.942 [2024-09-30 23:02:18.648109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.942 [2024-09-30 23:02:18.648142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.942 qpair failed and we were unable to recover it. 00:33:51.942 [2024-09-30 23:02:18.648512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.942 [2024-09-30 23:02:18.648540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.942 qpair failed and we were unable to recover it. 00:33:51.942 [2024-09-30 23:02:18.648922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.942 [2024-09-30 23:02:18.648952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.942 qpair failed and we were unable to recover it. 00:33:51.942 [2024-09-30 23:02:18.649207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.942 [2024-09-30 23:02:18.649236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.942 qpair failed and we were unable to recover it. 00:33:51.942 [2024-09-30 23:02:18.649630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.942 [2024-09-30 23:02:18.649659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.942 qpair failed and we were unable to recover it. 00:33:51.942 [2024-09-30 23:02:18.650006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.942 [2024-09-30 23:02:18.650036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.942 qpair failed and we were unable to recover it. 
00:33:51.942 [2024-09-30 23:02:18.650384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.942 [2024-09-30 23:02:18.650413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.942 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.650773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.650802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.651036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.651066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.651472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.651501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.651913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.651944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.652170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.652200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.652565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.652595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.652933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.652962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.653198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.653227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.653593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.653622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 
00:33:51.943 [2024-09-30 23:02:18.654003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.654034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.654412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.654441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.654834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.654862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.655112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.655142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.655565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.655594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.655845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.655876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.656104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.656134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.656497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.656525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.656986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.657017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.657272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.657301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 
00:33:51.943 [2024-09-30 23:02:18.657686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.657716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.658077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.658109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.658489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.658517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.658886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.658923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.659268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.659298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.659534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.659562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.659939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.659968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.660355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.660386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.660760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.660789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.661013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.661052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 
00:33:51.943 [2024-09-30 23:02:18.661289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.661320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.661718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.661747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.662098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.662129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.662372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.662400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.662775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.662806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.663173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.663203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.663663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.663693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.943 [2024-09-30 23:02:18.664054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.943 [2024-09-30 23:02:18.664086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.943 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.664470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.664499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.664845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.664876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 
00:33:51.944 [2024-09-30 23:02:18.664973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.665001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.665398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.665427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.665677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.665709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.665965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.665997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.666240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.666269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.666370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.666398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.666689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.666718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.667120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.667150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.667514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.667544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 00:33:51.944 [2024-09-30 23:02:18.667823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.667852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it. 
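The record pair above is one reconnect attempt: the POSIX sock layer's connect() toward 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) fails with errno 111, which on Linux is ECONNREFUSED, and nvme_tcp then declares the qpair unrecoverable. Connection refused means the peer actively rejected the TCP handshake, i.e. no target was listening on that port at the time. A minimal sketch of the same failure mode, assuming a plain Linux host and treating the log's address and port purely as placeholders:

/* Sketch: reproduce "connect() failed, errno = 111" against a port with
 * no listener. The address and port are placeholders taken from the log
 * above, not an endpoint this code knows anything about. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

    if (fd < 0)
        return 1;
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* with no listener on the target, errno is 111 (ECONNREFUSED) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Run against a host with no listener on that port, this prints "connect() failed, errno = 111 (Connection refused)", the same errno as every posix_sock_create record here; an unreachable host would instead surface EHOSTUNREACH or ETIMEDOUT.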
00:33:51.944 Read completed with error (sct=0, sc=8) 00:33:51.944 starting I/O failed
[... 31 more outstanding Read/Write completions fail the same way (32 in all, each followed by "starting I/O failed") ...]
00:33:51.944 [2024-09-30 23:02:18.670419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:51.944 [2024-09-30 23:02:18.670777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.944 [2024-09-30 23:02:18.670826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.944 qpair failed and we were unable to recover it.
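Here the failure surfaces to the NVMe layer: all 32 outstanding I/Os complete with status (sct=0, sc=8), which in the NVMe generic command status set reads as Command Aborted due to SQ Deletion (0x08), and spdk_nvme_qpair_process_completions reports CQ transport error -6 on qpair id 3. The -6 follows the usual negated-POSIX-errno convention, as the log's own parenthetical confirms: errno 6 is ENXIO, "No such device or address". A small sketch of that decoding, assuming Linux errno numbering:

/* Sketch: decode the negated-errno convention seen in the
 * "CQ transport error -6" record above (Linux errno numbering assumed). */
#include <stdio.h>
#include <string.h>

static void decode_transport_error(int rc)
{
    if (rc < 0)
        printf("CQ transport error %d (%s)\n", rc, strerror(-rc));
}

int main(void)
{
    decode_transport_error(-6);   /* -ENXIO: "No such device or address" */
    decode_transport_error(-111); /* -ECONNREFUSED: "Connection refused" */
    return 0;
}

After this error the driver abandons the old qpair (the tqpair pointer changes from 0x7f36dc000b90 to 0x12c2550) and resumes its connect() retries, which the target keeps refusing.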
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x12c2550 (addr=10.0.0.2, port=4420) from 2024-09-30 23:02:18.671 through 23:02:18.706 ...]
00:33:51.947 [2024-09-30 23:02:18.706851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.706879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.707263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.707293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.707639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.707669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.708025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.708056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.708279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.708306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.708683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.708712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.708925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.708956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.709194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.709231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.709575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.709604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.709976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.710006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 
00:33:51.947 [2024-09-30 23:02:18.710387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.710416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.710790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.710821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.711041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.711071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.711195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.711227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.711579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.711606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.711841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.711874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.712267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.712298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.712661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.712691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.712919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.712949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 00:33:51.947 [2024-09-30 23:02:18.713207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.947 [2024-09-30 23:02:18.713237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.947 qpair failed and we were unable to recover it. 
00:33:51.948 [2024-09-30 23:02:18.713608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.713637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.714022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.714053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.714405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.714435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.714804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.714836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.715227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.715257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.715621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.715650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.715864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.715903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.716230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.716260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.716481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.716510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.716902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.716934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 
00:33:51.948 [2024-09-30 23:02:18.717296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.717326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.717700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.717728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.718089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.718119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.718510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.718539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.718935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.718966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.719357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.719387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.719632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.719662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.720028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.720058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.720428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.720457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.720808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.720837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 
00:33:51.948 [2024-09-30 23:02:18.721219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.721249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.721615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.721644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.721833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.721861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.722255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.722286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.722644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.722674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.723071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.723101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.723321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.723355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.723708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.723739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.724150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.724180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.724551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.724579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 
00:33:51.948 [2024-09-30 23:02:18.724796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.724826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.725049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.725079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.725443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.725471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.725850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.725880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.726287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.726317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.726664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.726694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.726915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.948 [2024-09-30 23:02:18.726946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.948 qpair failed and we were unable to recover it. 00:33:51.948 [2024-09-30 23:02:18.727168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.727196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.727410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.727438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.727803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.727831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 
00:33:51.949 [2024-09-30 23:02:18.728203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.728233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.728606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.728636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.728877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.728918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.729115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.729143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.729524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.729552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.729925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.729955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.730335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.730364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.730739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.730768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.731108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.731139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.731515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.731543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 
00:33:51.949 [2024-09-30 23:02:18.731680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.731707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.732060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.732089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.732470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.732498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.732862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.732905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.733276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.733306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.733679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.733708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.733997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.734026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.734244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.734273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.734495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.734522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.734741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.734769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 
00:33:51.949 [2024-09-30 23:02:18.734978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.735009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.735373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.735403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.735772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.735802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.736145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.736175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.736418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.736446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.736789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.736819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.737199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.737244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.737521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.737553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.737915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.737946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.738041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.738069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 
00:33:51.949 [2024-09-30 23:02:18.738420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.738449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.738690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.738717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.738807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.738833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.739534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.739567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.739927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.739960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.740346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.949 [2024-09-30 23:02:18.740374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.949 qpair failed and we were unable to recover it. 00:33:51.949 [2024-09-30 23:02:18.740775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.740803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.740946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.740975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.741191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.741220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.741628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.741656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 
00:33:51.950 [2024-09-30 23:02:18.742026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.742056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.742421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.742452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.742816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.742844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.743100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.743138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.743342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.743374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.743602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.743630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.744037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.744069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.744279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.744310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.744677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.744707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.745067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.745097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 
00:33:51.950 [2024-09-30 23:02:18.745467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.745496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.745866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.745903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.746262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.746291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.746533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.746561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.746926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.746957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.747342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.747372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.747758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.747787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.748012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.748042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.748428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.748457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.748829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.748859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 
00:33:51.950 [2024-09-30 23:02:18.749119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.749148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.749516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.749546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.749912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.749943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.750320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.750349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.750697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.750727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.751091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.751121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.751354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.751382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.751754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.751782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.752151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.752182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.752540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.752568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 
00:33:51.950 [2024-09-30 23:02:18.752935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.752966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.753186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.753216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.753578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.753605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.753982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.950 [2024-09-30 23:02:18.754011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.950 qpair failed and we were unable to recover it. 00:33:51.950 [2024-09-30 23:02:18.754387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.951 [2024-09-30 23:02:18.754416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.951 qpair failed and we were unable to recover it. 00:33:51.951 [2024-09-30 23:02:18.754771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.951 [2024-09-30 23:02:18.754799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.951 qpair failed and we were unable to recover it. 00:33:51.951 [2024-09-30 23:02:18.755194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.951 [2024-09-30 23:02:18.755223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.951 qpair failed and we were unable to recover it. 00:33:51.951 [2024-09-30 23:02:18.755591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.951 [2024-09-30 23:02:18.755620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.951 qpair failed and we were unable to recover it. 00:33:51.951 [2024-09-30 23:02:18.755852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.951 [2024-09-30 23:02:18.755881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.951 qpair failed and we were unable to recover it. 00:33:51.951 [2024-09-30 23:02:18.756120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.951 [2024-09-30 23:02:18.756150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.951 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim about 200 more times, app timestamps 23:02:18.756250 through 23:02:18.827719, console timestamps 00:33:51.951 through 00:33:51.956, differing only in the microsecond timestamps ...]
00:33:51.956 [2024-09-30 23:02:18.828076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.828107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.828321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.828349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.828726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.828753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.829127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.829157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.829400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.829428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.829792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.829822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.830200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.830229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.830621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.830648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.831071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.831101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.831467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.831496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 
00:33:51.956 [2024-09-30 23:02:18.831867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.831903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.832268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.832296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.832675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.832704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.832962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.832991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.833340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.833369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.833734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.833763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.834130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.834160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.956 [2024-09-30 23:02:18.834544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.956 [2024-09-30 23:02:18.834575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.956 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.834938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.834977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.835326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.835355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 
00:33:51.957 [2024-09-30 23:02:18.835744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.835772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.836159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.836189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.836540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.836569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.836941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.836971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.837198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.837227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.837469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.837503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.837861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.837890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.838297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.838327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.838709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.838739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.839123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.839153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 
00:33:51.957 [2024-09-30 23:02:18.839497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.839526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.839933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.839963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.840399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.840429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.840783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.840813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.840914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.840943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.841537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.841643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.841859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.841923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.842037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.842068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.842415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.842445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.842796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.842827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 
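Note on the records above: errno = 111 is ECONNREFUSED on Linux, i.e. the TCP SYN to 10.0.0.2:4420 is being answered with a RST because nothing is listening on that port yet; at this point the failing handle also changes from tqpair=0x12c2550 to tqpair=0x7f36e4000b90, consistent with the driver allocating a fresh qpair object and retrying the same target. The following is a hypothetical standalone C demo (not SPDK code; address and port copied from the log) that reproduces the same errno when no listener is present:

    /*
     * Hypothetical demo: connect() to a TCP port with no listener
     * fails with ECONNREFUSED (111 on Linux), matching the log above.
     */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port = htons(4420) };

        if (fd < 0)
            return 1;
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* Peer sent RST: no listener bound to 10.0.0.2:4420. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }
        close(fd);
        return 0;
    }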
00:33:51.957 [2024-09-30 23:02:18.842947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.842976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.843325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.843355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.843710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.843740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.844077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.844107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.844351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.844381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.844783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.844814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.845189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.845220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.845439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.845467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.845863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.845905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.846317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.846346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 
00:33:51.957 [2024-09-30 23:02:18.846702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.846732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.846979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.847012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.847257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.847285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.847669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.847699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.848079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.848109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.848344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.848372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.957 [2024-09-30 23:02:18.848532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.957 [2024-09-30 23:02:18.848559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.957 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.848978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.849008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.849412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.849442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.849782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.849812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 
00:33:51.958 [2024-09-30 23:02:18.850169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.850198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.850573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.850603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.850963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.850992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.851252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.851280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.851495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.851523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.851883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.851930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.852136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.852164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.852545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.852573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.852951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.852988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.853217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.853246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 
00:33:51.958 [2024-09-30 23:02:18.853629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.853658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.854035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.854066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.854441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.854471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.854825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.854856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.855226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.855256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.855468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.855497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.855862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.855892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.856122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.856151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.856400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.856428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.856679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.856710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 
00:33:51.958 [2024-09-30 23:02:18.857080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.857111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.857368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.857398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.857858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.857886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.858263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.858292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.858660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.858689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.858913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.858950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.859205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.859234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.859385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.859415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.859765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.859794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.860151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.860183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 
00:33:51.958 [2024-09-30 23:02:18.860554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.860585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.860837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.860870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.861126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.861155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.861536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.861565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.958 [2024-09-30 23:02:18.861796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.958 [2024-09-30 23:02:18.861826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.958 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.861976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.862007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.862361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.862391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.862612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.862642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.862900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.862930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.863293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.863323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 
00:33:51.959 [2024-09-30 23:02:18.863698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.863728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.864070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.864099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.864485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.864514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.864881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.864919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.865287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.865315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.865557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.865588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.865968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.865999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.866390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.866418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.866639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.866667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.866914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.866943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 
00:33:51.959 [2024-09-30 23:02:18.867309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.867339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.867566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.867594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.867971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.868003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.868226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.868256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.868586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.868616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.868979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.869010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.869383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.869413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.869744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.869774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.870122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.870154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.870519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.870549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 
00:33:51.959 [2024-09-30 23:02:18.870758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.870787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.871157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.871186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.871438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.871469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.871847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.871876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.872252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.872282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.872501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.872537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.872914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.872945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.873166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.873195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.873566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.873595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.873967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.873998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 
00:33:51.959 [2024-09-30 23:02:18.874369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.874398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.874740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.874770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.875006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.959 [2024-09-30 23:02:18.875036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.959 qpair failed and we were unable to recover it. 00:33:51.959 [2024-09-30 23:02:18.875263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.875294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.875633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.875663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.876026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.876056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.876436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.876465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.876821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.876850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.877254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.877285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.877533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.877562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 
00:33:51.960 [2024-09-30 23:02:18.877939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.877971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.878232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.878260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.878620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.878649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.879026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.879056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.879269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.879298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.879506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.879535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.879738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.879765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.880130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.880161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.880379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.880407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.880499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.880526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 
00:33:51.960 [2024-09-30 23:02:18.881054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.881161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.881615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.881652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.882176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.882279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.882624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.882661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.882761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.882789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.883047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.883079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.883434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.883464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.883809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.883847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.884159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.884189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.884551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.884583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 
00:33:51.960 [2024-09-30 23:02:18.884964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.884994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.885394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.885423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.885783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.885814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.886271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.886302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.886511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.886541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.886913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.886955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.887053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.887080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.887469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.887497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.887717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.887745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.888162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.888192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 
00:33:51.960 [2024-09-30 23:02:18.888552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.888581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.960 [2024-09-30 23:02:18.888833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.960 [2024-09-30 23:02:18.888861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.960 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.889108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.889138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.889513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.889542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.889936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.889967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.890181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.890209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.890600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.890629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.890870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.890909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.891186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.891215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.891607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.891637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 
00:33:51.961 [2024-09-30 23:02:18.892017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.892049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.892283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.892312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.892694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.892724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.893083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.893113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.893461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.893490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.893866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.893905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.894187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.894221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.894587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.894616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.894982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.895013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.895381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.895409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 
00:33:51.961 [2024-09-30 23:02:18.895768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.895798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.896154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.896185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.896553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.896582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.896833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.896864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.897255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.897286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.897597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.897628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.897843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.897872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.898125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.898154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.898394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.898422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.898660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.898688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 
00:33:51.961 [2024-09-30 23:02:18.898937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.898969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.899321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.899350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.899727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.899756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.900136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.961 [2024-09-30 23:02:18.900165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.961 qpair failed and we were unable to recover it. 00:33:51.961 [2024-09-30 23:02:18.900392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.900421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.900640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.900675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.901037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.901067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.901437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.901467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.901846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.901875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.902252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.902282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 
00:33:51.962 [2024-09-30 23:02:18.902636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.902667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.902888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.902927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.903295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.903324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.903685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.903714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.904079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.904109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.904474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.904503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.904849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.904880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.905247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.905276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.905545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.905573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.905924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.905954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 
00:33:51.962 [2024-09-30 23:02:18.906295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.906324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.906691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.906720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.907078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.907108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.907498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.907528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.907888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.907935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.908283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.908312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.908684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.908715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.908944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.908976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.909191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.909219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.909602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.909633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 
00:33:51.962 [2024-09-30 23:02:18.909995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.910027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.910418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.910448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.910705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.910736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.910954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.910984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.911333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.911362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.911728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.911758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.912122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.912154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.912519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.912548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.912936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.912968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.962 [2024-09-30 23:02:18.913369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.913399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 
00:33:51.962 [2024-09-30 23:02:18.913774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.962 [2024-09-30 23:02:18.913804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.962 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.914180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.914219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.914577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.914606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.914835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.914865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.915146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.915177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.915636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.915672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.916019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.916049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.916390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.916419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.916797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.916826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.917081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.917114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 
00:33:51.963 [2024-09-30 23:02:18.917209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.917237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.917554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.917582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.917803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.917831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.918199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.918230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.918582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.918611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.918965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.918997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.919372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.919401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.919779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.919808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.920193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.920222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.920474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.920504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 
00:33:51.963 [2024-09-30 23:02:18.920870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.920910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.921297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.921329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.921560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.921590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.921959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.921990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.922225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.922254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.922626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.922656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.923005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.923036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.923390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.923423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.923770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.923800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.924147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.924179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 
00:33:51.963 [2024-09-30 23:02:18.924325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.924355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.924606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.924635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.925035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.925068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.925405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.925434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.925700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.925732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.926123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.926154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.926528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.926559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.963 [2024-09-30 23:02:18.926956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.963 [2024-09-30 23:02:18.926987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.963 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.927231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.927262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.927520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.927548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 
00:33:51.964 [2024-09-30 23:02:18.927936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.927967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.928350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.928380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.928595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.928623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.928983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.929014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.929219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.929247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.929639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.929676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.930039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.930069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.930338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.930367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.930608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.930637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.930862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.930926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 
00:33:51.964 [2024-09-30 23:02:18.931293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.931323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.931674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.931703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.932070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.932100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.932347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.932375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.932751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.932782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.933151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.933182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.933416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.933445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.933558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.933589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.933931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.933961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.934344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.934376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 
00:33:51.964 [2024-09-30 23:02:18.934742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.934772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.935144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.935176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.935390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.935419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.935669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.935697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.935793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.935820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.936172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.936201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.936408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.936439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.936633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.936663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.937036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.937066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 00:33:51.964 [2024-09-30 23:02:18.937432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.964 [2024-09-30 23:02:18.937461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:51.964 qpair failed and we were unable to recover it. 
00:33:52.236 [2024-09-30 23:02:19.004407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.236 [2024-09-30 23:02:19.004437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.236 qpair failed and we were unable to recover it. 00:33:52.236 [2024-09-30 23:02:19.004646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.236 [2024-09-30 23:02:19.004675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.236 qpair failed and we were unable to recover it. 00:33:52.236 [2024-09-30 23:02:19.004960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.236 [2024-09-30 23:02:19.004990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.236 qpair failed and we were unable to recover it. 00:33:52.236 [2024-09-30 23:02:19.005364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.236 [2024-09-30 23:02:19.005393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.236 qpair failed and we were unable to recover it. 00:33:52.236 [2024-09-30 23:02:19.005758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.236 [2024-09-30 23:02:19.005788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.236 qpair failed and we were unable to recover it. 00:33:52.236 [2024-09-30 23:02:19.006144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.236 [2024-09-30 23:02:19.006173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.236 qpair failed and we were unable to recover it. 00:33:52.236 [2024-09-30 23:02:19.006401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.236 [2024-09-30 23:02:19.006428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.236 qpair failed and we were unable to recover it. 00:33:52.236 [2024-09-30 23:02:19.006660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.006692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.006918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.006949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.007309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.007338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 
00:33:52.237 [2024-09-30 23:02:19.007614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.007643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.007848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.007877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.008134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.008164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.008434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.008464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.008818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.008847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.009065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.009096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.009469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.009497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.009859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.009888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.010266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.010295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.010513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.010540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 
00:33:52.237 [2024-09-30 23:02:19.010925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.010956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.011339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.011368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.011625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.011653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.011995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.012027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.012234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.012262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.012460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.012495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.012683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.012710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.012810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.012837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.013227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.013257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.013635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.013664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 
00:33:52.237 [2024-09-30 23:02:19.014014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.014045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.014168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.014197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.014541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.014570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.014678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.014709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.015087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.015116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.015325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.015353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.015508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.015537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.015892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.015932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.016150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.016179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.016417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.016447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 
00:33:52.237 [2024-09-30 23:02:19.016805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.016833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.017079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.017108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.017346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.237 [2024-09-30 23:02:19.017375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.237 qpair failed and we were unable to recover it. 00:33:52.237 [2024-09-30 23:02:19.017605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.017636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.017997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.018026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.018401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.018430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.018801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.018829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.019195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.019225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.019464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.019495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.019867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.019904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 
00:33:52.238 [2024-09-30 23:02:19.020269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.020297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.020673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.020701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.021054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.021085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.021366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.021395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.021751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.021780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.022042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.022072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.022372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.022402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.022616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.022647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.023027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.023057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.023270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.023299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 
00:33:52.238 [2024-09-30 23:02:19.023543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.023571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.023942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.023971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.024209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.024237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.024620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.024648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.024859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.024887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.025261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.025291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.025508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.025538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.025914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.025944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.026188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.026217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.026487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.026515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 
00:33:52.238 [2024-09-30 23:02:19.026938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.026968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.027190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.027218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.027644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.027672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.027885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.027924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.028123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.028151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.028570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.028599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.028721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.028748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.029112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.238 [2024-09-30 23:02:19.029142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.238 qpair failed and we were unable to recover it. 00:33:52.238 [2024-09-30 23:02:19.029585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.029613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.029994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.030024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 
00:33:52.239 [2024-09-30 23:02:19.030155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.030184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.030567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.030596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.030963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.030993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.031219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.031247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.031619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.031647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.031861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.031890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.032296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.032324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.032701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.032729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.032956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.032984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.033267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.033296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 
00:33:52.239 [2024-09-30 23:02:19.033746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.033774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.034119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.034149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.034526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.034560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.034930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.034962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.035332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.035360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.035601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.035629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.036041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.036071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.036281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.036309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.036681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.036710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 00:33:52.239 [2024-09-30 23:02:19.037094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.239 [2024-09-30 23:02:19.037124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420 00:33:52.239 qpair failed and we were unable to recover it. 
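The errno = 111 above is ECONNREFUSED: nothing is accepting on 10.0.0.2:4420 while the test has the target down, so every connect() the host's reconnect path issues is refused. A minimal standalone sketch of the same failure mode (illustrative only, not SPDK code; the address and port are copied from the log):

    /* Reproduces the errno = 111 seen in the posix.c:1055 lines: a TCP
     * connect() to a port with no listener fails with ECONNREFUSED. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on the target, errno is 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Run against an address with no listener, it prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c:1055 records above.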
00:33:52.239 [2024-09-30 23:02:19.037485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.239 [2024-09-30 23:02:19.037514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:52.239 qpair failed and we were unable to recover it.
[... one more identical tqpair=0x7f36dc000b90 failure at 23:02:19.037614 elided ...]
00:33:52.239 [2024-09-30 23:02:19.037721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c00f0 (9): Bad file descriptor
00:33:52.239 [2024-09-30 23:02:19.038479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.239 [2024-09-30 23:02:19.038589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.239 qpair failed and we were unable to recover it.
[... the same tqpair=0x12c2550 failure repeats from 23:02:19.039 to 23:02:19.040 ...]
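The single nvme_tcp.c:2196 line stands out from the connect() noise: errno 9 is EBADF, i.e. the socket behind tqpair=0x12c00f0 had already been closed when the transport tried to flush it, after which the reconnect attempts continue on a fresh tqpair (0x12c2550). The underlying errno is easy to reproduce in isolation (a sketch, assuming nothing about SPDK internals):

    /* Writing through an already-closed descriptor yields errno 9 (EBADF),
     * the same errno reported by the failed flush above. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        close(fds[1]);                         /* tear the descriptor down first */
        if (write(fds[1], "x", 1) < 0)         /* then try to flush through it   */
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        /* prints: write failed, errno = 9 (Bad file descriptor) */

        close(fds[0]);
        return 0;
    }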
00:33:52.239 [2024-09-30 23:02:19.041282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.239 [2024-09-30 23:02:19.041391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.239 qpair failed and we were unable to recover it.
00:33:52.239 [2024-09-30 23:02:19.041786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.239 [2024-09-30 23:02:19.041818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:52.239 qpair failed and we were unable to recover it.
[... the same tqpair=0x7f36dc000b90 failure repeats from 23:02:19.042 to 23:02:19.047, interleaved with the shell trace of the test script below ...]
00:33:52.240 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:52.240 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:33:52.240 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:33:52.240 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:52.240 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:52.240 [2024-09-30 23:02:19.047600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.240 [2024-09-30 23:02:19.047630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36dc000b90 with addr=10.0.0.2, port=4420
00:33:52.240 qpair failed and we were unable to recover it.
[... three more identical tqpair=0x7f36dc000b90 failures between 23:02:19.047 and 23:02:19.048 elided ...]
00:33:52.240 [2024-09-30 23:02:19.048975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.240 [2024-09-30 23:02:19.049079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.240 qpair failed and we were unable to recover it.
[... the same tqpair=0x7f36d8000b90 failure repeats continuously from 23:02:19.049 to 23:02:19.064 ...]
00:33:52.241 [2024-09-30 23:02:19.065064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.241 [2024-09-30 23:02:19.065095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.241 qpair failed and we were unable to recover it. 00:33:52.241 [2024-09-30 23:02:19.065429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.241 [2024-09-30 23:02:19.065460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.241 qpair failed and we were unable to recover it. 00:33:52.241 [2024-09-30 23:02:19.065836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.241 [2024-09-30 23:02:19.065866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.241 qpair failed and we were unable to recover it. 00:33:52.241 [2024-09-30 23:02:19.066075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.241 [2024-09-30 23:02:19.066107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.241 qpair failed and we were unable to recover it. 00:33:52.241 [2024-09-30 23:02:19.066476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.241 [2024-09-30 23:02:19.066505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.241 qpair failed and we were unable to recover it. 00:33:52.241 [2024-09-30 23:02:19.066878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.241 [2024-09-30 23:02:19.066921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.241 qpair failed and we were unable to recover it. 00:33:52.241 [2024-09-30 23:02:19.067049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.241 [2024-09-30 23:02:19.067080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.241 qpair failed and we were unable to recover it. 00:33:52.241 [2024-09-30 23:02:19.067468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.241 [2024-09-30 23:02:19.067498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.241 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.067865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.067904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.068248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.068278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 
00:33:52.242 [2024-09-30 23:02:19.068514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.068542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.068872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.068910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.069273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.069302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.069511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.069539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.069933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.069965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.070375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.070404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.070764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.070793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.071145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.071176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.071563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.071592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.071961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.071998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 
00:33:52.242 [2024-09-30 23:02:19.072372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.072400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.072615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.072647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.073032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.073061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.073302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.073330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.073735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.073764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.074134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.074165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.074530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.074560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.074995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.075024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.075374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.075404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.075782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.075811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 
00:33:52.242 [2024-09-30 23:02:19.076248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.076278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.076524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.076555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.076776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.076807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.077178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.077208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.077641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.077669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.078004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.078036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.078409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.078438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.078607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.078634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.078855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.078884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.078990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.079022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 
00:33:52.242 [2024-09-30 23:02:19.079400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.079430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.079678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.079706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.079913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.079943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.080038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.080066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.080626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.080733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.080961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.242 [2024-09-30 23:02:19.081003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36e4000b90 with addr=10.0.0.2, port=4420 00:33:52.242 qpair failed and we were unable to recover it. 00:33:52.242 [2024-09-30 23:02:19.081394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.243 [2024-09-30 23:02:19.081429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.243 qpair failed and we were unable to recover it. 00:33:52.243 [2024-09-30 23:02:19.081806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.243 [2024-09-30 23:02:19.081835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.243 qpair failed and we were unable to recover it. 00:33:52.243 [2024-09-30 23:02:19.082212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.243 [2024-09-30 23:02:19.082244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.243 qpair failed and we were unable to recover it. 00:33:52.243 [2024-09-30 23:02:19.082646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.243 [2024-09-30 23:02:19.082674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 00:33:52.243 qpair failed and we were unable to recover it. 
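errno = 111 on Linux is ECONNREFUSED: every connect() from the initiator to 10.0.0.2:4420 is being refused (typically a TCP RST) because nothing is listening on that port yet, which is exactly the window this target-disconnect test exercises. A minimal shell sketch of the same failure mode (an illustration against 127.0.0.1 and an assumed-idle port, not part of this log):

    # bash's /dev/tcp pseudo-device performs a real connect(2); with no
    # listener on the port, the redirection fails with "Connection refused",
    # the shell-level face of errno 111 (ECONNREFUSED).
    bash -c ': </dev/tcp/127.0.0.1/4420' \
      || echo "refused -- same errno 111 the initiator keeps logging above"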
00:33:52.243 [2024-09-30 23:02:19.083048 .. 23:02:19.085530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (8 attempts collapsed, interleaved with the shell trace below) 00:33:52.243 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.243 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:52.243 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.243 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.243 [2024-09-30 23:02:19.085792 .. 23:02:19.088297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (9 consecutive attempts collapsed)
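The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 trace above is the test asking the running target, over its JSON-RPC socket, for a RAM-backed block device: 64 MB in size, 512-byte blocks, named Malloc0 (the RPC echoes the bdev name back, which is the lone Malloc0 line further down). Run by hand, the same call would look like this (a sketch, assuming an SPDK target already listening on the default /var/tmp/spdk.sock):

    # Create the same RAM-backed bdev manually; rpc.py ships in the SPDK tree.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # prints: Malloc0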
00:33:52.243 [2024-09-30 23:02:19.088669 .. 23:02:19.109153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (60 consecutive attempts collapsed) 00:33:52.245 Malloc0 (the bdev_malloc_create RPC returning the new bdev's name, printed mid-stream between the 23:02:19.108366 and 23:02:19.108740 attempts)
00:33:52.245 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.245 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:52.245 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.245 [2024-09-30 23:02:19.109450 .. 23:02:19.111941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (8 attempts collapsed, interleaved with the shell trace above)
00:33:52.245 [2024-09-30 23:02:19.112311 .. 23:02:19.115377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (10 consecutive attempts collapsed)
00:33:52.245 [2024-09-30 23:02:19.115647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.115677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.116040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.116069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.116114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:52.245 [2024-09-30 23:02:19.116485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.116515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.116737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.116765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.117168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.117197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.117340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.117368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.117782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.117810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.118146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.118175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.118268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.118298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.118859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.118992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.245 qpair failed and we were unable to recover it.
00:33:52.245 [2024-09-30 23:02:19.119399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.245 [2024-09-30 23:02:19.119437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.119821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.119852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.120342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.120450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.120775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.120812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.121186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.121220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.121391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.121422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.121799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.121829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.122211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.122255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.122497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.122525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.122906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.122938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.123277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.123307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.123536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.123565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.123704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.123733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.123963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.123993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.124247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.124275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.124705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.124735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.124976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.125007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:52.246 [2024-09-30 23:02:19.125424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.125456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:52.246 [2024-09-30 23:02:19.125827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.125861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.246 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:52.246 [2024-09-30 23:02:19.126275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.126308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.126678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.126706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.127064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.127095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.127487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.127515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.127747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.127775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.128084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.128115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.128335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.128363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.128751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.128780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.129137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.129168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.129520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.129549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.129940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.129970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.130313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.130349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.130592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.130621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.130991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.131021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.131266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.131295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.131642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.131670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.246 qpair failed and we were unable to recover it.
00:33:52.246 [2024-09-30 23:02:19.131938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.246 [2024-09-30 23:02:19.131967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.132354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.132383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.132693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.132721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.132960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.132990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.133282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.133310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.133671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.133700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.133931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.133960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.134223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.134252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.134483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.134511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.134851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.134880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.135166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.135196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2550 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.135485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.135576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.136171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.136275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.136577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.136616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.136989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.137020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:52.247 [2024-09-30 23:02:19.137408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.137440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:52.247 [2024-09-30 23:02:19.137659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.137690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.247 [2024-09-30 23:02:19.137928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.137960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:52.247 [2024-09-30 23:02:19.138333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.138365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.138737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.138766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.139118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.139149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.139394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.139423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.139810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.139839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.140213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.140246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.140498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.140532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.140762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.140792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.141209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.141239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.141603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.141634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.141979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.142008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.142389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.142419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.142639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.142667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.142958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.142988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.143382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.143410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.143791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.143819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.144044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.144073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.144304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.144334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.144709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.144737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.247 qpair failed and we were unable to recover it.
00:33:52.247 [2024-09-30 23:02:19.145124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.247 [2024-09-30 23:02:19.145154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.145377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.145405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.145617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.145645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.146043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.146073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.146432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.146460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.146663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.146690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.146998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.147027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.147405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.147435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.147813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.147841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.148221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.148251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.148657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.148685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.148783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.148810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.149144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.149186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:52.248 [2024-09-30 23:02:19.149580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.149610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.149843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.149872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:52.248 [2024-09-30 23:02:19.150215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.150249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.248 [2024-09-30 23:02:19.150490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.150520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:52.248 [2024-09-30 23:02:19.150779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.150809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.151175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.151205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.151439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.151467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.151823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.151852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.152106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.152136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.152494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.152524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.152882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.152921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.153213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.153243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.153588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.153625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.154001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.154030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.154418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.154448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.154654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.154682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.155060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.155090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.155453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.155481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.155589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.155618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.155949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.155979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.156339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.248 [2024-09-30 23:02:19.156368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36d8000b90 with addr=10.0.0.2, port=4420
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.248 [2024-09-30 23:02:19.156516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:52.248 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:52.248 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:52.248 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.248 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:52.248 [2024-09-30 23:02:19.167391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.248 [2024-09-30 23:02:19.167553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.248 [2024-09-30 23:02:19.167613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.248 [2024-09-30 23:02:19.167637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.248 [2024-09-30 23:02:19.167657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.248 [2024-09-30 23:02:19.167710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.248 qpair failed and we were unable to recover it.
00:33:52.249 23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
23:02:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 899485
[2024-09-30 23:02:19.177270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-09-30 23:02:19.177363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-09-30 23:02:19.177393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-09-30 23:02:19.177408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-09-30 23:02:19.177422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
[2024-09-30 23:02:19.177453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:33:52.249 [2024-09-30 23:02:19.187264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.249 [2024-09-30 23:02:19.187347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.249 [2024-09-30 23:02:19.187369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.249 [2024-09-30 23:02:19.187379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.249 [2024-09-30 23:02:19.187390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.249 [2024-09-30 23:02:19.187413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.249 qpair failed and we were unable to recover it.
00:33:52.249 [2024-09-30 23:02:19.197294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.249 [2024-09-30 23:02:19.197376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.249 [2024-09-30 23:02:19.197394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.249 [2024-09-30 23:02:19.197401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.249 [2024-09-30 23:02:19.197411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.249 [2024-09-30 23:02:19.197429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.249 qpair failed and we were unable to recover it.
00:33:52.249 [2024-09-30 23:02:19.207288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.249 [2024-09-30 23:02:19.207387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.249 [2024-09-30 23:02:19.207404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.249 [2024-09-30 23:02:19.207417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.249 [2024-09-30 23:02:19.207424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.249 [2024-09-30 23:02:19.207440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.249 qpair failed and we were unable to recover it.
00:33:52.249 [2024-09-30 23:02:19.217198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.249 [2024-09-30 23:02:19.217266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.249 [2024-09-30 23:02:19.217282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.249 [2024-09-30 23:02:19.217290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.249 [2024-09-30 23:02:19.217296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.249 [2024-09-30 23:02:19.217312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.249 qpair failed and we were unable to recover it.
00:33:52.249 [2024-09-30 23:02:19.227231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.249 [2024-09-30 23:02:19.227295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.249 [2024-09-30 23:02:19.227311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.249 [2024-09-30 23:02:19.227318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.249 [2024-09-30 23:02:19.227325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.249 [2024-09-30 23:02:19.227341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.249 qpair failed and we were unable to recover it.
00:33:52.249 [2024-09-30 23:02:19.237320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.249 [2024-09-30 23:02:19.237433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.249 [2024-09-30 23:02:19.237450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.249 [2024-09-30 23:02:19.237457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.249 [2024-09-30 23:02:19.237464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.249 [2024-09-30 23:02:19.237480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.249 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.247387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.247462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.247478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.247485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.247492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.247508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.257370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.257432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.257448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.257455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.257462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.257477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.267378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.267433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.267448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.267456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.267462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.267478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.277435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.277534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.277550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.277558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.277564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.277581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.287441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.287524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.287539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.287546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.287552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.287568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.297460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.297527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.297548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.297556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.297562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.297578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.307479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.307540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.307556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.307563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.307570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.307585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.317506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.317575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.317592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.317599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.317607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.317623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.327444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.327521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.327537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.327545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.327551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.327567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.337561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.337653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.337669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.337677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.337683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.337704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.347618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.347684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.347718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.347728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.347736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.347758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.357633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.357724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.357757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.357766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.512 [2024-09-30 23:02:19.357774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.512 [2024-09-30 23:02:19.357797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.512 qpair failed and we were unable to recover it.
00:33:52.512 [2024-09-30 23:02:19.367584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.512 [2024-09-30 23:02:19.367660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.512 [2024-09-30 23:02:19.367693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.512 [2024-09-30 23:02:19.367703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.367710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.367732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.377664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.377725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.377743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.377751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.377757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.377774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.387585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.387645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.387668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.387675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.387682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.387698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.397649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.397718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.397734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.397741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.397748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.397764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.407826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.407907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.407923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.407931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.407938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.407954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.417804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.417858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.417873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.417881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.417887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.417910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.427727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.427786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.427805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.427813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.427819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.427843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.438014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.438116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.438133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.438141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.438147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.438165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.447998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.448078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.448093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.448101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.448107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.448123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.458000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.458058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.458074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.458081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.458087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.458104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.468007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.468065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.468081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.468088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.468094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.468110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.478031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.478108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.478124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.478131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.478137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.478153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.487956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.488031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.488046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.488053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.488060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.488075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.498039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.513 [2024-09-30 23:02:19.498153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.513 [2024-09-30 23:02:19.498168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.513 [2024-09-30 23:02:19.498175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.513 [2024-09-30 23:02:19.498182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.513 [2024-09-30 23:02:19.498197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.513 qpair failed and we were unable to recover it.
00:33:52.513 [2024-09-30 23:02:19.508093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.514 [2024-09-30 23:02:19.508157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.514 [2024-09-30 23:02:19.508173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.514 [2024-09-30 23:02:19.508180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.514 [2024-09-30 23:02:19.508186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.514 [2024-09-30 23:02:19.508201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.514 qpair failed and we were unable to recover it.
00:33:52.514 [2024-09-30 23:02:19.518145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.514 [2024-09-30 23:02:19.518224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.514 [2024-09-30 23:02:19.518239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.514 [2024-09-30 23:02:19.518246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.514 [2024-09-30 23:02:19.518258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.514 [2024-09-30 23:02:19.518274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.514 qpair failed and we were unable to recover it.
00:33:52.776 [2024-09-30 23:02:19.528091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.528171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.528188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.528196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.528202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.528225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.538202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.538269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.538286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.538293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.538299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.538315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.548216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.548324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.548340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.548347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.548353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.548369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.558257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.558326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.558341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.558348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.558355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.558370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.568345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.568419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.568435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.568442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.568449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.568464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.578287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.578383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.578398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.578406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.578413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.578429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.588329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.588395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.588411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.588418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.588425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.588440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.598386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.598451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.598467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.598474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.598480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.598496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.608462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.608543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.608558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.608570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.608577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.608593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.618424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.618504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.618521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.618528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.618536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.618552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.628486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.628546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.628562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.628570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.628576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.628591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.638451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.638526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.638541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.638548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.638555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.638570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.648566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.648645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.648678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.648687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.777 [2024-09-30 23:02:19.648694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.777 [2024-09-30 23:02:19.648716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.777 qpair failed and we were unable to recover it.
00:33:52.777 [2024-09-30 23:02:19.658549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.777 [2024-09-30 23:02:19.658612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.777 [2024-09-30 23:02:19.658646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.777 [2024-09-30 23:02:19.658656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.658663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.658686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.668576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.668645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.668678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.668688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.668695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.668717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.678593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.678707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.678727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.678734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.678741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.678758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.688686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.688760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.688775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.688783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.688789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.688805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.698696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.698793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.698809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.698822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.698829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.698845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.708685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.708741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.708758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.708765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.708771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.708787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.718736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.718805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.718821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.718829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.718835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.718851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.728819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.728904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.728920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.728928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.728934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.728950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.738685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.738742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.738758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.738765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.738771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.738787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.748717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.748798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.748814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.748821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.748828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.748844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.758869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.758950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.758966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.758973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.758980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.758995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.768788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.768854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.768871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.768878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.768884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.768909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.778937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.779029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.779046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.779054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.779060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.779077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:52.778 [2024-09-30 23:02:19.788941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:52.778 [2024-09-30 23:02:19.789007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:52.778 [2024-09-30 23:02:19.789027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:52.778 [2024-09-30 23:02:19.789035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:52.778 [2024-09-30 23:02:19.789041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:52.778 [2024-09-30 23:02:19.789057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:52.778 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.798982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.799061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.799077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.799085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.799092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.799107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.809033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.809113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.809128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.809135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.809141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.809157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.819018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.819113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.819130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.819137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.819143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.819159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.829079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.829149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.829164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.829171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.829177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.829199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.838991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.839094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.839110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.839118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.839124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.839140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.849232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.849310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.849326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.849333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.849339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.849355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.859171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.859234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.859249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.859256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.859263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.859278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.869201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.869267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.869282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.869289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.869296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.869311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.879230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.879302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.879324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.879331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.879338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.879356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.889311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:53.042 [2024-09-30 23:02:19.889380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:53.042 [2024-09-30 23:02:19.889396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:53.042 [2024-09-30 23:02:19.889404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.042 [2024-09-30 23:02:19.889410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:53.042 [2024-09-30 23:02:19.889425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:53.042 qpair failed and we were unable to recover it.
00:33:53.042 [2024-09-30 23:02:19.899190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.042 [2024-09-30 23:02:19.899257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.042 [2024-09-30 23:02:19.899273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.042 [2024-09-30 23:02:19.899280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.042 [2024-09-30 23:02:19.899286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.042 [2024-09-30 23:02:19.899302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.042 qpair failed and we were unable to recover it. 00:33:53.042 [2024-09-30 23:02:19.909347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.042 [2024-09-30 23:02:19.909414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.042 [2024-09-30 23:02:19.909429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.042 [2024-09-30 23:02:19.909436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.042 [2024-09-30 23:02:19.909443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.042 [2024-09-30 23:02:19.909458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.042 qpair failed and we were unable to recover it. 00:33:53.042 [2024-09-30 23:02:19.919397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.042 [2024-09-30 23:02:19.919466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.042 [2024-09-30 23:02:19.919481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.042 [2024-09-30 23:02:19.919488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.042 [2024-09-30 23:02:19.919495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.042 [2024-09-30 23:02:19.919516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 
00:33:53.043 [2024-09-30 23:02:19.929396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:19.929502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:19.929517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:19.929524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:19.929531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:19.929546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.043 [2024-09-30 23:02:19.939349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:19.939415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:19.939431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:19.939438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:19.939445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:19.939460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.043 [2024-09-30 23:02:19.949388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:19.949456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:19.949470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:19.949477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:19.949484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:19.949499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 
00:33:53.043 [2024-09-30 23:02:19.959459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:19.959528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:19.959543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:19.959550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:19.959556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:19.959572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.043 [2024-09-30 23:02:19.969520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:19.969585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:19.969604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:19.969612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:19.969618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:19.969634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.043 [2024-09-30 23:02:19.979500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:19.979564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:19.979581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:19.979588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:19.979594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:19.979610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 
00:33:53.043 [2024-09-30 23:02:19.989543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:19.989604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:19.989619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:19.989626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:19.989632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:19.989647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.043 [2024-09-30 23:02:19.999599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:19.999667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:19.999682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:19.999690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:19.999696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:19.999711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.043 [2024-09-30 23:02:20.009594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:20.009666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:20.009690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:20.009697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:20.009710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:20.009729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 
00:33:53.043 [2024-09-30 23:02:20.019656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:20.019729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:20.019757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:20.019765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:20.019773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:20.019794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.043 [2024-09-30 23:02:20.029659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:20.029727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:20.029744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:20.029752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:20.029759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:20.029776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.043 [2024-09-30 23:02:20.039667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:20.039783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:20.039833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:20.039846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:20.039856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:20.039886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 
00:33:53.043 [2024-09-30 23:02:20.049716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.043 [2024-09-30 23:02:20.049812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.043 [2024-09-30 23:02:20.049831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.043 [2024-09-30 23:02:20.049839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.043 [2024-09-30 23:02:20.049846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.043 [2024-09-30 23:02:20.049865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.043 qpair failed and we were unable to recover it. 00:33:53.307 [2024-09-30 23:02:20.059652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.059729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.059747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.059755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.059762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.059779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 00:33:53.307 [2024-09-30 23:02:20.069699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.069774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.069790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.069798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.069804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.069822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 
00:33:53.307 [2024-09-30 23:02:20.079830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.079903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.079928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.079936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.079943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.079964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 00:33:53.307 [2024-09-30 23:02:20.089848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.089926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.089941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.089949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.089956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.089973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 00:33:53.307 [2024-09-30 23:02:20.099778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.099854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.099869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.099876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.099889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.099911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 
00:33:53.307 [2024-09-30 23:02:20.109985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.110047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.110063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.110071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.110077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.110094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 00:33:53.307 [2024-09-30 23:02:20.119840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.119913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.119929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.119937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.119944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.119960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 00:33:53.307 [2024-09-30 23:02:20.130010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.130089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.130104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.130112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.130119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.130135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 
00:33:53.307 [2024-09-30 23:02:20.140068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.140140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.140156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.307 [2024-09-30 23:02:20.140163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.307 [2024-09-30 23:02:20.140170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.307 [2024-09-30 23:02:20.140187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.307 qpair failed and we were unable to recover it. 00:33:53.307 [2024-09-30 23:02:20.150019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.307 [2024-09-30 23:02:20.150088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.307 [2024-09-30 23:02:20.150112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.150122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.150129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.150149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.160118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.160206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.160223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.160231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.160238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.160255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 
00:33:53.308 [2024-09-30 23:02:20.170060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.170133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.170149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.170157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.170164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.170181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.180145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.180209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.180225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.180235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.180243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.180260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.190195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.190255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.190271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.190284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.190290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.190307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 
00:33:53.308 [2024-09-30 23:02:20.200210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.200274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.200290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.200298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.200304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.200320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.210153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.210218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.210234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.210241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.210248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.210264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.220247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.220309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.220325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.220332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.220339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.220355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 
00:33:53.308 [2024-09-30 23:02:20.230278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.230345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.230361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.230368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.230375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.230391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.240304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.240372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.240389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.240396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.240403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.240419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.250416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.250518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.250534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.250541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.250548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.250565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 
00:33:53.308 [2024-09-30 23:02:20.260372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.260433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.260449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.260456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.260463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.260479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.270283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.270344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.270360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.270368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.270375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.270391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 00:33:53.308 [2024-09-30 23:02:20.280415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.308 [2024-09-30 23:02:20.280485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.308 [2024-09-30 23:02:20.280505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.308 [2024-09-30 23:02:20.280513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.308 [2024-09-30 23:02:20.280519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.308 [2024-09-30 23:02:20.280536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.308 qpair failed and we were unable to recover it. 
00:33:53.308 [2024-09-30 23:02:20.290382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.309 [2024-09-30 23:02:20.290458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.309 [2024-09-30 23:02:20.290474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.309 [2024-09-30 23:02:20.290482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.309 [2024-09-30 23:02:20.290488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.309 [2024-09-30 23:02:20.290504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.309 qpair failed and we were unable to recover it. 00:33:53.309 [2024-09-30 23:02:20.300453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.309 [2024-09-30 23:02:20.300517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.309 [2024-09-30 23:02:20.300532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.309 [2024-09-30 23:02:20.300540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.309 [2024-09-30 23:02:20.300546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.309 [2024-09-30 23:02:20.300562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.309 qpair failed and we were unable to recover it. 00:33:53.309 [2024-09-30 23:02:20.310518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.309 [2024-09-30 23:02:20.310583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.309 [2024-09-30 23:02:20.310609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.309 [2024-09-30 23:02:20.310616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.309 [2024-09-30 23:02:20.310623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.309 [2024-09-30 23:02:20.310643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.309 qpair failed and we were unable to recover it. 
00:33:53.309 [2024-09-30 23:02:20.320592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.309 [2024-09-30 23:02:20.320708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.309 [2024-09-30 23:02:20.320724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.309 [2024-09-30 23:02:20.320732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.309 [2024-09-30 23:02:20.320738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.309 [2024-09-30 23:02:20.320755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.309 qpair failed and we were unable to recover it. 00:33:53.571 [2024-09-30 23:02:20.330591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.571 [2024-09-30 23:02:20.330680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.571 [2024-09-30 23:02:20.330697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.571 [2024-09-30 23:02:20.330705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.571 [2024-09-30 23:02:20.330712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.571 [2024-09-30 23:02:20.330729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.571 qpair failed and we were unable to recover it. 00:33:53.571 [2024-09-30 23:02:20.340476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.571 [2024-09-30 23:02:20.340545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.571 [2024-09-30 23:02:20.340561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.571 [2024-09-30 23:02:20.340568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.571 [2024-09-30 23:02:20.340575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.571 [2024-09-30 23:02:20.340590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.571 qpair failed and we were unable to recover it. 
00:33:53.571 [2024-09-30 23:02:20.350622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.571 [2024-09-30 23:02:20.350684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.571 [2024-09-30 23:02:20.350700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.571 [2024-09-30 23:02:20.350707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.571 [2024-09-30 23:02:20.350714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.571 [2024-09-30 23:02:20.350730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.571 qpair failed and we were unable to recover it. 00:33:53.571 [2024-09-30 23:02:20.360673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.571 [2024-09-30 23:02:20.360741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.571 [2024-09-30 23:02:20.360757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.571 [2024-09-30 23:02:20.360765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.571 [2024-09-30 23:02:20.360772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.571 [2024-09-30 23:02:20.360787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.571 qpair failed and we were unable to recover it. 00:33:53.571 [2024-09-30 23:02:20.370617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.571 [2024-09-30 23:02:20.370707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.571 [2024-09-30 23:02:20.370736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.571 [2024-09-30 23:02:20.370743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.571 [2024-09-30 23:02:20.370749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.571 [2024-09-30 23:02:20.370773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.571 qpair failed and we were unable to recover it. 
00:33:53.571 [2024-09-30 23:02:20.380734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.571 [2024-09-30 23:02:20.380794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.571 [2024-09-30 23:02:20.380810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.571 [2024-09-30 23:02:20.380818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.571 [2024-09-30 23:02:20.380824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.571 [2024-09-30 23:02:20.380840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.571 qpair failed and we were unable to recover it. 00:33:53.571 [2024-09-30 23:02:20.390843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.571 [2024-09-30 23:02:20.390914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.571 [2024-09-30 23:02:20.390930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.571 [2024-09-30 23:02:20.390937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.571 [2024-09-30 23:02:20.390944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.571 [2024-09-30 23:02:20.390960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.571 qpair failed and we were unable to recover it. 00:33:53.571 [2024-09-30 23:02:20.400715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.571 [2024-09-30 23:02:20.400779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.572 [2024-09-30 23:02:20.400795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.572 [2024-09-30 23:02:20.400802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.572 [2024-09-30 23:02:20.400808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.572 [2024-09-30 23:02:20.400824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.572 qpair failed and we were unable to recover it. 
00:33:53.572 [2024-09-30 23:02:20.410845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.572 [2024-09-30 23:02:20.410920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.572 [2024-09-30 23:02:20.410936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.572 [2024-09-30 23:02:20.410943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.572 [2024-09-30 23:02:20.410950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.572 [2024-09-30 23:02:20.410972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.572 qpair failed and we were unable to recover it. 00:33:53.572 [2024-09-30 23:02:20.420724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.572 [2024-09-30 23:02:20.420822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.572 [2024-09-30 23:02:20.420839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.572 [2024-09-30 23:02:20.420846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.572 [2024-09-30 23:02:20.420853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.572 [2024-09-30 23:02:20.420869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.572 qpair failed and we were unable to recover it. 00:33:53.572 [2024-09-30 23:02:20.430788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.572 [2024-09-30 23:02:20.430869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.572 [2024-09-30 23:02:20.430886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.572 [2024-09-30 23:02:20.430900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.572 [2024-09-30 23:02:20.430907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:53.572 [2024-09-30 23:02:20.430924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:53.572 qpair failed and we were unable to recover it. 
00:33:54.225 [2024-09-30 23:02:21.072717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.225 [2024-09-30 23:02:21.072764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.225 [2024-09-30 23:02:21.072778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.225 [2024-09-30 23:02:21.072785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.225 [2024-09-30 23:02:21.072791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.225 [2024-09-30 23:02:21.072806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-09-30 23:02:21.082648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.225 [2024-09-30 23:02:21.082706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.225 [2024-09-30 23:02:21.082720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.225 [2024-09-30 23:02:21.082727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.225 [2024-09-30 23:02:21.082738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.225 [2024-09-30 23:02:21.082752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-09-30 23:02:21.092802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.225 [2024-09-30 23:02:21.092860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.225 [2024-09-30 23:02:21.092873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.225 [2024-09-30 23:02:21.092880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.225 [2024-09-30 23:02:21.092886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.225 [2024-09-30 23:02:21.092905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.225 qpair failed and we were unable to recover it. 
00:33:54.226 [2024-09-30 23:02:21.102776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.102835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.102849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.102855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.102862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.102876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-09-30 23:02:21.112681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.112727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.112740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.112747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.112753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.112767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-09-30 23:02:21.122888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.122953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.122966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.122972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.122978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.122993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 
00:33:54.226 [2024-09-30 23:02:21.132926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.132988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.133001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.133008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.133015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.133029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-09-30 23:02:21.142811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.142864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.142885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.142892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.142904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.142929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-09-30 23:02:21.152781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.152828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.152841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.152848] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.152855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.152869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 
00:33:54.226 [2024-09-30 23:02:21.162985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.163042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.163055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.163062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.163068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.163082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-09-30 23:02:21.172990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.173045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.173058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.173070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.173077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.173091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-09-30 23:02:21.182984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.183040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.183053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.183060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.183066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.183080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 
00:33:54.226 [2024-09-30 23:02:21.193035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.193085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.193097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.193104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.193111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.193125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-09-30 23:02:21.203000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.203068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.203080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.203087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.203094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.203107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-09-30 23:02:21.213118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.226 [2024-09-30 23:02:21.213175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.226 [2024-09-30 23:02:21.213188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.226 [2024-09-30 23:02:21.213194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.226 [2024-09-30 23:02:21.213201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.226 [2024-09-30 23:02:21.213214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.226 qpair failed and we were unable to recover it. 
00:33:54.487 [2024-09-30 23:02:21.223177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.487 [2024-09-30 23:02:21.223227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.487 [2024-09-30 23:02:21.223240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.487 [2024-09-30 23:02:21.223247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.487 [2024-09-30 23:02:21.223253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.487 [2024-09-30 23:02:21.223267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.487 qpair failed and we were unable to recover it. 00:33:54.487 [2024-09-30 23:02:21.233154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.487 [2024-09-30 23:02:21.233211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.487 [2024-09-30 23:02:21.233224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.487 [2024-09-30 23:02:21.233231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.487 [2024-09-30 23:02:21.233237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.487 [2024-09-30 23:02:21.233251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.487 qpair failed and we were unable to recover it. 00:33:54.487 [2024-09-30 23:02:21.243241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.487 [2024-09-30 23:02:21.243342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.487 [2024-09-30 23:02:21.243356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.243363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.243369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.243383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 
00:33:54.488 [2024-09-30 23:02:21.253252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.253304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.253316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.253323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.253329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.253343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 00:33:54.488 [2024-09-30 23:02:21.263233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.263275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.263288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.263298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.263305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.263318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 00:33:54.488 [2024-09-30 23:02:21.273213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.273260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.273272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.273279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.273286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.273299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 
00:33:54.488 [2024-09-30 23:02:21.283320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.283377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.283390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.283397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.283403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.283416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 00:33:54.488 [2024-09-30 23:02:21.293261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.293357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.293370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.293377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.293383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.293399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 00:33:54.488 [2024-09-30 23:02:21.303326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.303372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.303385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.303392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.303398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.303411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 
00:33:54.488 [2024-09-30 23:02:21.313257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.313304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.313317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.313323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.313330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.313343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 00:33:54.488 [2024-09-30 23:02:21.323412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.323469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.323481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.323488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.323495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.323508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 00:33:54.488 [2024-09-30 23:02:21.333447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.333500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.333513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.333519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.333525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.333539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 
00:33:54.488 [2024-09-30 23:02:21.343437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.343489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.343502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.343509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.343515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.343529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 00:33:54.488 [2024-09-30 23:02:21.353473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.353517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.353533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.353540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.353546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.353560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 00:33:54.488 [2024-09-30 23:02:21.363534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.363591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.363604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.363611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.363617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.488 [2024-09-30 23:02:21.363631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.488 qpair failed and we were unable to recover it. 
00:33:54.488 [2024-09-30 23:02:21.373434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.488 [2024-09-30 23:02:21.373495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.488 [2024-09-30 23:02:21.373507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.488 [2024-09-30 23:02:21.373514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.488 [2024-09-30 23:02:21.373520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.373534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.489 [2024-09-30 23:02:21.383581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.383637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.383652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.383659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.383665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.383683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.489 [2024-09-30 23:02:21.393617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.393695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.393708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.393715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.393721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.393739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 
00:33:54.489 [2024-09-30 23:02:21.403522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.403576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.403589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.403596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.403602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.403616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.489 [2024-09-30 23:02:21.413684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.413752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.413765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.413772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.413778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.413792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.489 [2024-09-30 23:02:21.423649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.423698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.423711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.423718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.423724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.423738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 
00:33:54.489 [2024-09-30 23:02:21.433680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.433773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.433785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.433792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.433798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.433813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.489 [2024-09-30 23:02:21.443866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.443933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.443950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.443957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.443963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.443977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.489 [2024-09-30 23:02:21.453839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.453891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.453908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.453915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.453921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.453935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 
00:33:54.489 [2024-09-30 23:02:21.463810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.463858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.463871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.463877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.463884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.463902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.489 [2024-09-30 23:02:21.473702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.473749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.473762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.473769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.473775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.473788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.489 [2024-09-30 23:02:21.483750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.483805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.483818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.483825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.483831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.483848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 
00:33:54.489 [2024-09-30 23:02:21.493903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.489 [2024-09-30 23:02:21.493955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.489 [2024-09-30 23:02:21.493968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.489 [2024-09-30 23:02:21.493975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.489 [2024-09-30 23:02:21.493981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.489 [2024-09-30 23:02:21.493995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.489 qpair failed and we were unable to recover it. 00:33:54.752 [2024-09-30 23:02:21.503884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.752 [2024-09-30 23:02:21.503936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.752 [2024-09-30 23:02:21.503949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.752 [2024-09-30 23:02:21.503956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.752 [2024-09-30 23:02:21.503962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.752 [2024-09-30 23:02:21.503976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.752 qpair failed and we were unable to recover it. 00:33:54.752 [2024-09-30 23:02:21.513904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.752 [2024-09-30 23:02:21.513954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.752 [2024-09-30 23:02:21.513967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.752 [2024-09-30 23:02:21.513974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.752 [2024-09-30 23:02:21.513980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.752 [2024-09-30 23:02:21.513994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.752 qpair failed and we were unable to recover it. 
00:33:54.752 [2024-09-30 23:02:21.523984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.752 [2024-09-30 23:02:21.524036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.752 [2024-09-30 23:02:21.524049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.752 [2024-09-30 23:02:21.524055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.752 [2024-09-30 23:02:21.524062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.752 [2024-09-30 23:02:21.524076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.752 qpair failed and we were unable to recover it. 00:33:54.752 [2024-09-30 23:02:21.533946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.752 [2024-09-30 23:02:21.534001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.752 [2024-09-30 23:02:21.534016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.752 [2024-09-30 23:02:21.534023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.752 [2024-09-30 23:02:21.534030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.752 [2024-09-30 23:02:21.534044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.752 qpair failed and we were unable to recover it. 00:33:54.752 [2024-09-30 23:02:21.543987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.752 [2024-09-30 23:02:21.544080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.752 [2024-09-30 23:02:21.544093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.752 [2024-09-30 23:02:21.544100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.752 [2024-09-30 23:02:21.544107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:54.752 [2024-09-30 23:02:21.544122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.752 qpair failed and we were unable to recover it. 
00:33:54.752 [2024-09-30 23:02:21.554040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.752 [2024-09-30 23:02:21.554132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.752 [2024-09-30 23:02:21.554145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.752 [2024-09-30 23:02:21.554152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.752 [2024-09-30 23:02:21.554158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.752 [2024-09-30 23:02:21.554172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.752 qpair failed and we were unable to recover it.
00:33:54.752 [2024-09-30 23:02:21.563969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.752 [2024-09-30 23:02:21.564021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.752 [2024-09-30 23:02:21.564034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.752 [2024-09-30 23:02:21.564041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.752 [2024-09-30 23:02:21.564047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.752 [2024-09-30 23:02:21.564061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.752 qpair failed and we were unable to recover it.
00:33:54.752 [2024-09-30 23:02:21.574143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.752 [2024-09-30 23:02:21.574194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.752 [2024-09-30 23:02:21.574208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.752 [2024-09-30 23:02:21.574215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.752 [2024-09-30 23:02:21.574224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.752 [2024-09-30 23:02:21.574238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.752 qpair failed and we were unable to recover it.
00:33:54.752 [2024-09-30 23:02:21.584105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.752 [2024-09-30 23:02:21.584157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.752 [2024-09-30 23:02:21.584170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.752 [2024-09-30 23:02:21.584177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.752 [2024-09-30 23:02:21.584183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.752 [2024-09-30 23:02:21.584197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.752 qpair failed and we were unable to recover it.
00:33:54.752 [2024-09-30 23:02:21.594098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.752 [2024-09-30 23:02:21.594143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.752 [2024-09-30 23:02:21.594156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.752 [2024-09-30 23:02:21.594163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.752 [2024-09-30 23:02:21.594169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.752 [2024-09-30 23:02:21.594183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.752 qpair failed and we were unable to recover it.
00:33:54.752 [2024-09-30 23:02:21.604226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.752 [2024-09-30 23:02:21.604280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.752 [2024-09-30 23:02:21.604292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.752 [2024-09-30 23:02:21.604299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.752 [2024-09-30 23:02:21.604305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.752 [2024-09-30 23:02:21.604319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.752 qpair failed and we were unable to recover it.
00:33:54.752 [2024-09-30 23:02:21.614242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.752 [2024-09-30 23:02:21.614293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.752 [2024-09-30 23:02:21.614307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.752 [2024-09-30 23:02:21.614314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.752 [2024-09-30 23:02:21.614320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.752 [2024-09-30 23:02:21.614337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.752 qpair failed and we were unable to recover it.
00:33:54.752 [2024-09-30 23:02:21.624205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.752 [2024-09-30 23:02:21.624256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.624269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.624276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.624282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.624296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.634127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.634174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.634187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.634194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.634200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.634215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.644308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.644357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.644371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.644378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.644385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.644398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.654345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.654399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.654412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.654418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.654425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.654438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.664217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.664281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.664294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.664300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.664310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.664324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.674340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.674391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.674403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.674411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.674417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.674430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.684301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.684403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.684416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.684424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.684432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.684446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.694462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.694516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.694528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.694535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.694541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.694555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.704312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.704359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.704372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.704379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.704385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.704399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.714457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.714519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.714532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.714539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.714545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.714559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.724547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.724633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.724646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.724652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.724659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.724672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.734575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.734630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.734643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.734650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.734656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.734670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.744549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.744602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.744614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.744621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.744627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.744641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.753 [2024-09-30 23:02:21.754572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.753 [2024-09-30 23:02:21.754617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.753 [2024-09-30 23:02:21.754629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.753 [2024-09-30 23:02:21.754642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.753 [2024-09-30 23:02:21.754649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.753 [2024-09-30 23:02:21.754663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.753 qpair failed and we were unable to recover it.
00:33:54.754 [2024-09-30 23:02:21.764569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:54.754 [2024-09-30 23:02:21.764621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:54.754 [2024-09-30 23:02:21.764634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:54.754 [2024-09-30 23:02:21.764641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:54.754 [2024-09-30 23:02:21.764647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:54.754 [2024-09-30 23:02:21.764660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:54.754 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.774666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.774748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.774761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.774768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.774774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.774788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.784661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.784708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.784720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.784728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.784734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.784748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.794671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.794718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.794731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.794737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.794744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.794758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.804748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.804804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.804817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.804824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.804830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.804844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.814690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.814795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.814809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.814816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.814823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.814841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.824759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.824804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.824817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.824824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.824831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.824845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.834792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.834842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.834855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.834862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.834869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.834884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.844858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.844912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.844928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.844935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.844942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.844956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.854898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.854990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.855003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.855010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.855017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.855030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.864874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.864921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.864934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.864941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.864947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.864961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.874905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.874954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.874966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.874973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.874979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.874993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.884841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.884901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.884914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.884921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.016 [2024-09-30 23:02:21.884927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.016 [2024-09-30 23:02:21.884941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.016 qpair failed and we were unable to recover it.
00:33:55.016 [2024-09-30 23:02:21.894868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.016 [2024-09-30 23:02:21.894942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.016 [2024-09-30 23:02:21.894954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.016 [2024-09-30 23:02:21.894961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.894967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.894982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.904980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.905030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.905043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.905050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.905056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.905070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.914973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.915029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.915041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.915049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.915055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.915069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.925075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.925133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.925147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.925153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.925160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.925173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.935123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.935183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.935199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.935207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.935214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.935229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.944948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.944993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.945006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.945013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.945019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.945033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.955120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.955168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.955183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.955190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.955196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.955212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.965151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.965206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.965219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.965226] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.965233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.965246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.975192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.975245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.975257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.975264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.975270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.975288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.985180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.985228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.985240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.985247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.985253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.985267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:21.995198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:21.995244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:21.995256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:21.995263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:21.995270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:21.995283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:22.005288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:22.005342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:22.005354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:22.005361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:22.005368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:22.005381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:22.015166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:22.015215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:22.015228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:22.015234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:22.015241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:22.015254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.017 [2024-09-30 23:02:22.025299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.017 [2024-09-30 23:02:22.025342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.017 [2024-09-30 23:02:22.025358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.017 [2024-09-30 23:02:22.025365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.017 [2024-09-30 23:02:22.025371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.017 [2024-09-30 23:02:22.025385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.017 qpair failed and we were unable to recover it.
00:33:55.279 [2024-09-30 23:02:22.035317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.279 [2024-09-30 23:02:22.035367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.279 [2024-09-30 23:02:22.035380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.279 [2024-09-30 23:02:22.035387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.279 [2024-09-30 23:02:22.035393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.279 [2024-09-30 23:02:22.035407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.279 qpair failed and we were unable to recover it.
00:33:55.279 [2024-09-30 23:02:22.045388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.279 [2024-09-30 23:02:22.045443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.279 [2024-09-30 23:02:22.045456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.279 [2024-09-30 23:02:22.045463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.279 [2024-09-30 23:02:22.045470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.279 [2024-09-30 23:02:22.045483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.279 qpair failed and we were unable to recover it.
00:33:55.279 [2024-09-30 23:02:22.055386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.279 [2024-09-30 23:02:22.055435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.279 [2024-09-30 23:02:22.055447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.279 [2024-09-30 23:02:22.055454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.279 [2024-09-30 23:02:22.055461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.279 [2024-09-30 23:02:22.055475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.279 qpair failed and we were unable to recover it.
00:33:55.279 [2024-09-30 23:02:22.065386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.279 [2024-09-30 23:02:22.065434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.279 [2024-09-30 23:02:22.065446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.279 [2024-09-30 23:02:22.065453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.279 [2024-09-30 23:02:22.065464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.279 [2024-09-30 23:02:22.065478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.279 qpair failed and we were unable to recover it.
00:33:55.279 [2024-09-30 23:02:22.075399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.279 [2024-09-30 23:02:22.075446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.279 [2024-09-30 23:02:22.075459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.279 [2024-09-30 23:02:22.075466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.279 [2024-09-30 23:02:22.075472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.075486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.085482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.085538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.085550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.085557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.085564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.085577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.095479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.095529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.095541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.095548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.095554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.095568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.105513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.105557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.105571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.105578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.105584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.105598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.115555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.115604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.115616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.115623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.115629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.115643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.125615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.125671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.125684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.125691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.125697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.125711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.135475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.135576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.135590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.135597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.135603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.135622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.145630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.145699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.145713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.145720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.145726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.145740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.155690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.155735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.155748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.155755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.155764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.155779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.165702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.165756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.165769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.165775] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.165782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.165795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.175742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.175790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.175803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.175810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.175816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.175830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.185620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.185708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.185721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.185728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.185735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.185748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.195762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.195837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.195849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.195856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.195862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.195876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.205861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.280 [2024-09-30 23:02:22.205920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.280 [2024-09-30 23:02:22.205933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.280 [2024-09-30 23:02:22.205940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.280 [2024-09-30 23:02:22.205946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.280 [2024-09-30 23:02:22.205961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.280 qpair failed and we were unable to recover it.
00:33:55.280 [2024-09-30 23:02:22.215854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.281 [2024-09-30 23:02:22.215938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.281 [2024-09-30 23:02:22.215950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.281 [2024-09-30 23:02:22.215957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.281 [2024-09-30 23:02:22.215964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.281 [2024-09-30 23:02:22.215978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.281 qpair failed and we were unable to recover it.
00:33:55.281 [2024-09-30 23:02:22.225842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.281 [2024-09-30 23:02:22.225895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.281 [2024-09-30 23:02:22.225909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.281 [2024-09-30 23:02:22.225916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.281 [2024-09-30 23:02:22.225922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.281 [2024-09-30 23:02:22.225935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.281 qpair failed and we were unable to recover it.
00:33:55.281 [2024-09-30 23:02:22.235921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.281 [2024-09-30 23:02:22.235970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.281 [2024-09-30 23:02:22.235983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.281 [2024-09-30 23:02:22.235990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.281 [2024-09-30 23:02:22.235996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.281 [2024-09-30 23:02:22.236010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.281 qpair failed and we were unable to recover it.
00:33:55.281 [2024-09-30 23:02:22.245944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.281 [2024-09-30 23:02:22.245996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.281 [2024-09-30 23:02:22.246009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.281 [2024-09-30 23:02:22.246019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.281 [2024-09-30 23:02:22.246025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.281 [2024-09-30 23:02:22.246039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.281 qpair failed and we were unable to recover it. 00:33:55.281 [2024-09-30 23:02:22.255942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.281 [2024-09-30 23:02:22.255991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.281 [2024-09-30 23:02:22.256004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.281 [2024-09-30 23:02:22.256011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.281 [2024-09-30 23:02:22.256018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.281 [2024-09-30 23:02:22.256032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.281 qpair failed and we were unable to recover it. 00:33:55.281 [2024-09-30 23:02:22.265973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.281 [2024-09-30 23:02:22.266024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.281 [2024-09-30 23:02:22.266037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.281 [2024-09-30 23:02:22.266044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.281 [2024-09-30 23:02:22.266050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.281 [2024-09-30 23:02:22.266064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.281 qpair failed and we were unable to recover it. 
00:33:55.281 [2024-09-30 23:02:22.275988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.281 [2024-09-30 23:02:22.276073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.281 [2024-09-30 23:02:22.276086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.281 [2024-09-30 23:02:22.276093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.281 [2024-09-30 23:02:22.276099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.281 [2024-09-30 23:02:22.276113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.281 qpair failed and we were unable to recover it. 00:33:55.281 [2024-09-30 23:02:22.286058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.281 [2024-09-30 23:02:22.286112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.281 [2024-09-30 23:02:22.286125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.281 [2024-09-30 23:02:22.286132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.281 [2024-09-30 23:02:22.286138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.281 [2024-09-30 23:02:22.286152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.281 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.296060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.296111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.296124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.296130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.296137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.296150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 
00:33:55.544 [2024-09-30 23:02:22.306033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.306083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.306095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.306102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.306109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.306123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.316075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.316122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.316135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.316142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.316148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.316162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.326155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.326209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.326222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.326229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.326235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.326249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 
00:33:55.544 [2024-09-30 23:02:22.336180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.336235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.336248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.336258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.336264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.336278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.346170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.346221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.346234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.346241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.346247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.346261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.356189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.356237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.356250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.356257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.356263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.356277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 
00:33:55.544 [2024-09-30 23:02:22.366283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.366347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.366360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.366367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.366374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.366387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.376272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.376321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.376335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.376342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.376348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.376362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.386175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.386222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.386235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.386241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.386248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.386261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 
00:33:55.544 [2024-09-30 23:02:22.396307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.396352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.396365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.396372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.396378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.396392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.406390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.406440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.544 [2024-09-30 23:02:22.406453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.544 [2024-09-30 23:02:22.406460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.544 [2024-09-30 23:02:22.406466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.544 [2024-09-30 23:02:22.406479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.544 qpair failed and we were unable to recover it. 00:33:55.544 [2024-09-30 23:02:22.416389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.544 [2024-09-30 23:02:22.416473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.416485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.416492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.416499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.416512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 
00:33:55.545 [2024-09-30 23:02:22.426393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.426465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.426482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.426489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.426496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.426514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.545 [2024-09-30 23:02:22.436420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.436473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.436486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.436493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.436500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.436515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.545 [2024-09-30 23:02:22.446501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.446587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.446601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.446608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.446615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.446629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 
00:33:55.545 [2024-09-30 23:02:22.456471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.456520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.456532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.456539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.456545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.456559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.545 [2024-09-30 23:02:22.466503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.466554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.466567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.466574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.466580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.466600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.545 [2024-09-30 23:02:22.476536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.476580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.476592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.476599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.476605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.476619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 
00:33:55.545 [2024-09-30 23:02:22.486602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.486659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.486683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.486692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.486699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.486718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.545 [2024-09-30 23:02:22.496615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.496667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.496682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.496689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.496696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.496710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.545 [2024-09-30 23:02:22.506608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.506679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.506693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.506700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.506706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.506720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 
00:33:55.545 [2024-09-30 23:02:22.516632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.516685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.516703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.516710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.516716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.516730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.545 [2024-09-30 23:02:22.526712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.526774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.526798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.526807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.526814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.526832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.545 [2024-09-30 23:02:22.536718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.536768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.536783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.536790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.536797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.536811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 
00:33:55.545 [2024-09-30 23:02:22.546589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.545 [2024-09-30 23:02:22.546635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.545 [2024-09-30 23:02:22.546649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.545 [2024-09-30 23:02:22.546656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.545 [2024-09-30 23:02:22.546662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.545 [2024-09-30 23:02:22.546677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.545 qpair failed and we were unable to recover it. 00:33:55.546 [2024-09-30 23:02:22.556742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.546 [2024-09-30 23:02:22.556788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.546 [2024-09-30 23:02:22.556801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.546 [2024-09-30 23:02:22.556808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.546 [2024-09-30 23:02:22.556814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.546 [2024-09-30 23:02:22.556833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.546 qpair failed and we were unable to recover it. 00:33:55.808 [2024-09-30 23:02:22.566823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.808 [2024-09-30 23:02:22.566877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.808 [2024-09-30 23:02:22.566890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.808 [2024-09-30 23:02:22.566901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.808 [2024-09-30 23:02:22.566907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.808 [2024-09-30 23:02:22.566922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.808 qpair failed and we were unable to recover it. 
00:33:55.808 [2024-09-30 23:02:22.576815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.808 [2024-09-30 23:02:22.576866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.808 [2024-09-30 23:02:22.576878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.808 [2024-09-30 23:02:22.576885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.808 [2024-09-30 23:02:22.576892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.808 [2024-09-30 23:02:22.576910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.808 qpair failed and we were unable to recover it. 00:33:55.808 [2024-09-30 23:02:22.586751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.808 [2024-09-30 23:02:22.586798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.808 [2024-09-30 23:02:22.586811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.808 [2024-09-30 23:02:22.586818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.808 [2024-09-30 23:02:22.586824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.808 [2024-09-30 23:02:22.586838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.808 qpair failed and we were unable to recover it. 00:33:55.808 [2024-09-30 23:02:22.596781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.808 [2024-09-30 23:02:22.596830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.596842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.596849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.596856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.596870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-09-30 23:02:22.606953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.607020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.607033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.607040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.607046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.607060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-09-30 23:02:22.616800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.616857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.616869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.616876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.616883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.616899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-09-30 23:02:22.626963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.627014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.627027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.627034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.627041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.627054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-09-30 23:02:22.636974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.637022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.637035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.637041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.637048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.637062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-09-30 23:02:22.647039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.647092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.647105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.647112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.647122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.647136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-09-30 23:02:22.657034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.657086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.657099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.657107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.657113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.657127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-09-30 23:02:22.667032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.667084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.667096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.667103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.667110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.667123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-09-30 23:02:22.677071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.677124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.677137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.677143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.677150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.677164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 00:33:55.809 [2024-09-30 23:02:22.687152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.809 [2024-09-30 23:02:22.687250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.809 [2024-09-30 23:02:22.687263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.809 [2024-09-30 23:02:22.687270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.809 [2024-09-30 23:02:22.687277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:55.809 [2024-09-30 23:02:22.687292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.809 qpair failed and we were unable to recover it. 
00:33:55.809 [2024-09-30 23:02:22.697168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:55.809 [2024-09-30 23:02:22.697222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:55.809 [2024-09-30 23:02:22.697235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:55.809 [2024-09-30 23:02:22.697242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.809 [2024-09-30 23:02:22.697248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:55.809 [2024-09-30 23:02:22.697262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:55.809 qpair failed and we were unable to recover it.
[the six *ERROR* records and the "qpair failed and we were unable to recover it." line above repeat as a block 68 more times, one reconnect attempt roughly every 10 ms from 23:02:22.707 through 23:02:23.379 (elapsed 00:33:55.809 to 00:33:56.600), differing only in timestamps]
00:33:56.600 [2024-09-30 23:02:23.388985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.600 [2024-09-30 23:02:23.389052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.600 [2024-09-30 23:02:23.389065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.600 [2024-09-30 23:02:23.389072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.600 [2024-09-30 23:02:23.389078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.600 [2024-09-30 23:02:23.389091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.600 qpair failed and we were unable to recover it. 00:33:56.600 [2024-09-30 23:02:23.399046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.600 [2024-09-30 23:02:23.399098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.600 [2024-09-30 23:02:23.399111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.600 [2024-09-30 23:02:23.399118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.600 [2024-09-30 23:02:23.399124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.600 [2024-09-30 23:02:23.399138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.600 qpair failed and we were unable to recover it. 00:33:56.600 [2024-09-30 23:02:23.409088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.600 [2024-09-30 23:02:23.409144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.600 [2024-09-30 23:02:23.409160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.600 [2024-09-30 23:02:23.409167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.600 [2024-09-30 23:02:23.409173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.600 [2024-09-30 23:02:23.409187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.600 qpair failed and we were unable to recover it. 
00:33:56.600 [2024-09-30 23:02:23.419114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.600 [2024-09-30 23:02:23.419200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.600 [2024-09-30 23:02:23.419212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.600 [2024-09-30 23:02:23.419219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.600 [2024-09-30 23:02:23.419225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.600 [2024-09-30 23:02:23.419239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.600 qpair failed and we were unable to recover it. 00:33:56.600 [2024-09-30 23:02:23.429179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.600 [2024-09-30 23:02:23.429229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.600 [2024-09-30 23:02:23.429242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.600 [2024-09-30 23:02:23.429249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.600 [2024-09-30 23:02:23.429255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.600 [2024-09-30 23:02:23.429269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.600 qpair failed and we were unable to recover it. 00:33:56.600 [2024-09-30 23:02:23.439161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.600 [2024-09-30 23:02:23.439204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.600 [2024-09-30 23:02:23.439217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.600 [2024-09-30 23:02:23.439225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.600 [2024-09-30 23:02:23.439232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.600 [2024-09-30 23:02:23.439245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.600 qpair failed and we were unable to recover it. 
00:33:56.600 [2024-09-30 23:02:23.449250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.600 [2024-09-30 23:02:23.449334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.600 [2024-09-30 23:02:23.449348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.600 [2024-09-30 23:02:23.449354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.449361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.449374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.459234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.459281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.459293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.459300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.459307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.459320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.469238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.469283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.469296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.469303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.469309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.469323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 
00:33:56.601 [2024-09-30 23:02:23.479302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.479382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.479395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.479401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.479408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.479422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.489343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.489392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.489405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.489412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.489418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.489431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.499339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.499421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.499437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.499444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.499450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.499464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 
00:33:56.601 [2024-09-30 23:02:23.509341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.509389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.509402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.509409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.509415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.509428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.519369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.519451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.519464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.519471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.519477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.519491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.529431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.529485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.529498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.529505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.529511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.529525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 
00:33:56.601 [2024-09-30 23:02:23.539439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.539487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.539500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.539507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.539513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.539530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.549449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.549495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.549508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.549515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.549521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.549535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.559451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.559492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.559505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.559512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.559518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.559531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 
00:33:56.601 [2024-09-30 23:02:23.569550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.569602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.569615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.569622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.569629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.601 [2024-09-30 23:02:23.569642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.601 qpair failed and we were unable to recover it. 00:33:56.601 [2024-09-30 23:02:23.579549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.601 [2024-09-30 23:02:23.579604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.601 [2024-09-30 23:02:23.579628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.601 [2024-09-30 23:02:23.579637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.601 [2024-09-30 23:02:23.579645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.602 [2024-09-30 23:02:23.579663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.602 qpair failed and we were unable to recover it. 00:33:56.602 [2024-09-30 23:02:23.589541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.602 [2024-09-30 23:02:23.589593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.602 [2024-09-30 23:02:23.589622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.602 [2024-09-30 23:02:23.589631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.602 [2024-09-30 23:02:23.589638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.602 [2024-09-30 23:02:23.589656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.602 qpair failed and we were unable to recover it. 
00:33:56.602 [2024-09-30 23:02:23.599477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.602 [2024-09-30 23:02:23.599531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.602 [2024-09-30 23:02:23.599547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.602 [2024-09-30 23:02:23.599554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.602 [2024-09-30 23:02:23.599560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.602 [2024-09-30 23:02:23.599576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.602 qpair failed and we were unable to recover it. 00:33:56.602 [2024-09-30 23:02:23.609525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.602 [2024-09-30 23:02:23.609577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.602 [2024-09-30 23:02:23.609591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.602 [2024-09-30 23:02:23.609598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.602 [2024-09-30 23:02:23.609605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.602 [2024-09-30 23:02:23.609624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.602 qpair failed and we were unable to recover it. 00:33:56.865 [2024-09-30 23:02:23.619531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.865 [2024-09-30 23:02:23.619579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.865 [2024-09-30 23:02:23.619592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.865 [2024-09-30 23:02:23.619599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.865 [2024-09-30 23:02:23.619605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.865 [2024-09-30 23:02:23.619619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.865 qpair failed and we were unable to recover it. 
00:33:56.865 [2024-09-30 23:02:23.629650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.865 [2024-09-30 23:02:23.629704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.865 [2024-09-30 23:02:23.629717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.865 [2024-09-30 23:02:23.629725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.865 [2024-09-30 23:02:23.629735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.865 [2024-09-30 23:02:23.629750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.865 qpair failed and we were unable to recover it. 00:33:56.865 [2024-09-30 23:02:23.639681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.865 [2024-09-30 23:02:23.639732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.865 [2024-09-30 23:02:23.639745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.865 [2024-09-30 23:02:23.639752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.865 [2024-09-30 23:02:23.639758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.865 [2024-09-30 23:02:23.639772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.865 qpair failed and we were unable to recover it. 00:33:56.865 [2024-09-30 23:02:23.649754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.865 [2024-09-30 23:02:23.649807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.865 [2024-09-30 23:02:23.649820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.865 [2024-09-30 23:02:23.649828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.865 [2024-09-30 23:02:23.649835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.865 [2024-09-30 23:02:23.649849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.865 qpair failed and we were unable to recover it. 
00:33:56.865 [2024-09-30 23:02:23.659719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.865 [2024-09-30 23:02:23.659773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.865 [2024-09-30 23:02:23.659787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.865 [2024-09-30 23:02:23.659794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.865 [2024-09-30 23:02:23.659800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.865 [2024-09-30 23:02:23.659814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.865 qpair failed and we were unable to recover it. 00:33:56.865 [2024-09-30 23:02:23.669772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.865 [2024-09-30 23:02:23.669857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.865 [2024-09-30 23:02:23.669870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.865 [2024-09-30 23:02:23.669877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.865 [2024-09-30 23:02:23.669884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.669903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.679797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.679902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.679914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.679922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.679928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.679943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 
00:33:56.866 [2024-09-30 23:02:23.689874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.689939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.689953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.689961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.689968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.689982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.699840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.699907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.699921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.699928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.699936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.699954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.709760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.709815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.709828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.709835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.709841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.709856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 
00:33:56.866 [2024-09-30 23:02:23.719889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.719979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.719992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.719999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.720009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.720023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.729841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.729901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.729914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.729921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.729927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.729941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.739971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.740022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.740035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.740042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.740048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.740063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 
00:33:56.866 [2024-09-30 23:02:23.749837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.749888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.749905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.749912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.749918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.749932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.760001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.760050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.760063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.760070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.760077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.760091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.770058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.770138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.770150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.770157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.770164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.770177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 
00:33:56.866 [2024-09-30 23:02:23.780069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.780137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.780150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.780156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.780163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.780176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.790052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.790099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.790113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.790120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.790126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.790140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 00:33:56.866 [2024-09-30 23:02:23.800091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.800137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.866 [2024-09-30 23:02:23.800150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.866 [2024-09-30 23:02:23.800157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.866 [2024-09-30 23:02:23.800163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.866 [2024-09-30 23:02:23.800177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.866 qpair failed and we were unable to recover it. 
00:33:56.866 [2024-09-30 23:02:23.810046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.866 [2024-09-30 23:02:23.810115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.867 [2024-09-30 23:02:23.810128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.867 [2024-09-30 23:02:23.810138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.867 [2024-09-30 23:02:23.810145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.867 [2024-09-30 23:02:23.810158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.867 qpair failed and we were unable to recover it. 00:33:56.867 [2024-09-30 23:02:23.820177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.867 [2024-09-30 23:02:23.820225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.867 [2024-09-30 23:02:23.820237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.867 [2024-09-30 23:02:23.820244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.867 [2024-09-30 23:02:23.820250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.867 [2024-09-30 23:02:23.820264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.867 qpair failed and we were unable to recover it. 00:33:56.867 [2024-09-30 23:02:23.830182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.867 [2024-09-30 23:02:23.830230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.867 [2024-09-30 23:02:23.830243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.867 [2024-09-30 23:02:23.830249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.867 [2024-09-30 23:02:23.830256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.867 [2024-09-30 23:02:23.830269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.867 qpair failed and we were unable to recover it. 
00:33:56.867 [2024-09-30 23:02:23.840204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.867 [2024-09-30 23:02:23.840251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.867 [2024-09-30 23:02:23.840264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.867 [2024-09-30 23:02:23.840271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.867 [2024-09-30 23:02:23.840277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.867 [2024-09-30 23:02:23.840291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.867 qpair failed and we were unable to recover it. 00:33:56.867 [2024-09-30 23:02:23.850159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.867 [2024-09-30 23:02:23.850211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.867 [2024-09-30 23:02:23.850225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.867 [2024-09-30 23:02:23.850232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.867 [2024-09-30 23:02:23.850238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.867 [2024-09-30 23:02:23.850259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.867 qpair failed and we were unable to recover it. 00:33:56.867 [2024-09-30 23:02:23.860277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.867 [2024-09-30 23:02:23.860322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.867 [2024-09-30 23:02:23.860335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.867 [2024-09-30 23:02:23.860343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.867 [2024-09-30 23:02:23.860349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90 00:33:56.867 [2024-09-30 23:02:23.860363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.867 qpair failed and we were unable to recover it. 
00:33:57.657 [2024-09-30 23:02:24.471925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:57.657 [2024-09-30 23:02:24.472001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:57.657 [2024-09-30 23:02:24.472013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:57.657 [2024-09-30 23:02:24.472020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:57.657 [2024-09-30 23:02:24.472027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:57.657 [2024-09-30 23:02:24.472041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:57.657 qpair failed and we were unable to recover it.
00:33:57.657 [2024-09-30 23:02:24.481843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:57.657 [2024-09-30 23:02:24.481891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:57.657 [2024-09-30 23:02:24.481912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:57.657 [2024-09-30 23:02:24.481919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:57.657 [2024-09-30 23:02:24.481926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f36d8000b90
00:33:57.657 [2024-09-30 23:02:24.481939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:57.657 qpair failed and we were unable to recover it.
00:33:57.657 [2024-09-30 23:02:24.482132] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:33:57.657 A controller has encountered a failure and is being reset.
00:33:57.657 Controller properly reset.
00:33:57.657 Initializing NVMe Controllers
00:33:57.657 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:57.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:57.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:33:57.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:33:57.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:33:57.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:33:57.657 Initialization complete. Launching workers.
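The keep-alive failure is the trigger for recovery: once the admin queue pair is dead as well, the host driver declares the controller failed ("A controller has encountered a failure and is being reset."), resets it, reattaches to 10.0.0.2:4420, and binds one fresh I/O queue pair per application core (lcores 0 through 3); that reconnect path is exactly what this disconnect test exercises. A hedged sketch of how the recovered state could be confirmed from the target side by hand (not the test's own tooling; socket path assumed):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1   # 10.0.0.2:4420 should still be listed
"$RPC" nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1      # expect the admin qpair plus the per-lcore I/O qpairs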
00:33:57.657 Starting thread on core 1
00:33:57.657 Starting thread on core 2
00:33:57.657 Starting thread on core 3
00:33:57.657 Starting thread on core 0
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:33:57.657 
00:33:57.657 real 0m11.484s
00:33:57.657 user 0m21.745s
00:33:57.657 sys 0m3.940s
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:57.657 ************************************
00:33:57.657 END TEST nvmf_target_disconnect_tc2
00:33:57.657 ************************************
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:57.657 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:57.657 rmmod nvme_tcp
00:33:57.657 rmmod nvme_fabrics
00:33:57.657 rmmod nvme_keyring
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 900163 ']'
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 900163
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 900163 ']'
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 900163
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 900163
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 900163'
00:33:57.918 killing process with pid 900163
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 900163
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 900163
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:57.918 23:02:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:00.463 23:02:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:00.463 
00:34:00.463 real 0m22.099s
00:34:00.463 user 0m49.595s
00:34:00.463 sys 0m10.339s
00:34:00.463 23:02:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:00.463 23:02:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:00.463 ************************************
00:34:00.463 END TEST nvmf_target_disconnect
00:34:00.463 ************************************
00:34:00.463 23:02:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:34:00.463 
00:34:00.463 real 6m35.967s
00:34:00.463 user 11m25.232s
00:34:00.463 sys 2m18.212s
00:34:00.463 23:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:00.463 23:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:00.463 ************************************
00:34:00.463 END TEST nvmf_host
00:34:00.463 ************************************
00:34:00.463 23:02:27 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:34:00.463 23:02:27 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:34:00.463 23:02:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:34:00.463 23:02:27 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:34:00.463 23:02:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:00.463 23:02:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:00.463 ************************************
00:34:00.463 START TEST nvmf_target_core_interrupt_mode
00:34:00.463 ************************************
00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:00.463 * Looking for test storage... 00:34:00.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:00.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.463 --rc genhtml_branch_coverage=1 00:34:00.463 --rc genhtml_function_coverage=1 00:34:00.463 --rc genhtml_legend=1 00:34:00.463 --rc geninfo_all_blocks=1 00:34:00.463 --rc geninfo_unexecuted_blocks=1 00:34:00.463 00:34:00.463 ' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:00.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.463 --rc genhtml_branch_coverage=1 00:34:00.463 --rc genhtml_function_coverage=1 00:34:00.463 --rc genhtml_legend=1 00:34:00.463 --rc geninfo_all_blocks=1 00:34:00.463 --rc geninfo_unexecuted_blocks=1 00:34:00.463 00:34:00.463 ' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:00.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.463 --rc genhtml_branch_coverage=1 00:34:00.463 --rc genhtml_function_coverage=1 00:34:00.463 --rc genhtml_legend=1 00:34:00.463 --rc geninfo_all_blocks=1 00:34:00.463 --rc geninfo_unexecuted_blocks=1 00:34:00.463 00:34:00.463 ' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:00.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.463 --rc genhtml_branch_coverage=1 00:34:00.463 --rc genhtml_function_coverage=1 00:34:00.463 --rc genhtml_legend=1 00:34:00.463 --rc geninfo_all_blocks=1 00:34:00.463 --rc geninfo_unexecuted_blocks=1 00:34:00.463 00:34:00.463 ' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.463 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:00.464 ************************************ 00:34:00.464 START TEST nvmf_abort 00:34:00.464 ************************************ 00:34:00.464 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:00.725 * Looking for test storage... 00:34:00.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:34:00.725 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:00.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.726 --rc genhtml_branch_coverage=1 00:34:00.726 --rc genhtml_function_coverage=1 00:34:00.726 --rc genhtml_legend=1 00:34:00.726 --rc geninfo_all_blocks=1 00:34:00.726 --rc geninfo_unexecuted_blocks=1 00:34:00.726 00:34:00.726 ' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:00.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.726 --rc genhtml_branch_coverage=1 00:34:00.726 --rc genhtml_function_coverage=1 00:34:00.726 --rc genhtml_legend=1 00:34:00.726 --rc geninfo_all_blocks=1 00:34:00.726 --rc geninfo_unexecuted_blocks=1 00:34:00.726 00:34:00.726 ' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:00.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.726 --rc genhtml_branch_coverage=1 00:34:00.726 --rc genhtml_function_coverage=1 00:34:00.726 --rc genhtml_legend=1 00:34:00.726 --rc geninfo_all_blocks=1 00:34:00.726 --rc geninfo_unexecuted_blocks=1 00:34:00.726 00:34:00.726 ' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:00.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.726 --rc genhtml_branch_coverage=1 00:34:00.726 --rc genhtml_function_coverage=1 00:34:00.726 --rc genhtml_legend=1 00:34:00.726 --rc geninfo_all_blocks=1 00:34:00.726 --rc geninfo_unexecuted_blocks=1 00:34:00.726 00:34:00.726 ' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.726 23:02:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:34:00.726 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:08.869 23:02:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:08.869 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:08.870 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:08.870 23:02:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:08.870 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:08.870 Found net devices under 0000:31:00.0: cvl_0_0 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == 
up ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:08.870 Found net devices under 0000:31:00.1: cvl_0_1 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:08.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:08.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms
00:34:08.870 
00:34:08.870 --- 10.0.0.2 ping statistics ---
00:34:08.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:08.870 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms
00:34:08.870 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:08.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:08.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms
00:34:08.870 
00:34:08.870 --- 10.0.0.1 ping statistics ---
00:34:08.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:08.871 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=905981
00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 905981
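The xtrace above is the harness's standard plumbing for physical NICs: the first E810 port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2/24 for the target, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1/24 initiator, the NVMe/TCP port is opened with a tagged iptables rule, and one ping in each direction proves the path before any NVMe traffic flows. Condensed into a stand-alone sketch (the same commands the trace shows, gathered for readability):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # root namespace to target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back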
23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 905981 ']' 00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:08.871 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:08.871 [2024-09-30 23:02:35.492942] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:08.871 [2024-09-30 23:02:35.494077] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:34:08.871 [2024-09-30 23:02:35.494129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.871 [2024-09-30 23:02:35.585419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:08.871 [2024-09-30 23:02:35.680237] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.871 [2024-09-30 23:02:35.680292] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.871 [2024-09-30 23:02:35.680301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.871 [2024-09-30 23:02:35.680308] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.871 [2024-09-30 23:02:35.680320] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.871 [2024-09-30 23:02:35.680492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.871 [2024-09-30 23:02:35.680639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.871 [2024-09-30 23:02:35.680640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:08.871 [2024-09-30 23:02:35.766677] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:08.871 [2024-09-30 23:02:35.766790] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:08.871 [2024-09-30 23:02:35.767375] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:08.871 [2024-09-30 23:02:35.767655] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
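nvmf_tgt starts inside the target namespace with -m 0xE, a core mask selecting cores 1, 2 and 3 (hence "Total cores available: 3" and the three reactors), -i 0 as the shared-memory instance ID, -e 0xFFFF to enable every tracepoint group, and --interrupt-mode, which parks the reactors and nvmf poll groups on file descriptors instead of busy-polling; waitforlisten then blocks until the RPC socket answers. A hedged by-hand equivalent, with a polling loop standing in for the harness's waitforlisten helper (spdk_get_version is a stock SPDK RPC):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.1        # the target is up once the RPC socket answers
done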
00:34:09.443 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:09.444 [2024-09-30 23:02:36.353510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:09.444 Malloc0 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:09.444 Delay0 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:09.444 23:02:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:09.444 [2024-09-30 23:02:36.441508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.444 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:34:09.706 [2024-09-30 23:02:36.571151] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:11.751 Initializing NVMe Controllers 00:34:11.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:11.752 controller IO queue size 128 less than required 00:34:11.752 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:34:11.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:34:11.752 Initialization complete. Launching workers. 
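The abort run above is driven entirely by commands captured in this log; collected in one place (paths relative to the spdk checkout, RPCs issued over the default /var/tmp/spdk.sock), the sequence is:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # the delay bdev adds ~1s (1000000 us) of latency per I/O, so reads stay
    # in flight long enough for the abort commands to find something to cancel
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

With a queue depth of 128 against a deliberately slow namespace, nearly every submitted read is still queued when its abort arrives, which is why the summary that follows reports roughly 28.7k successful aborts out of roughly 28.8k submitted.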
00:34:11.752 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28733 00:34:11.752 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28790, failed to submit 66 00:34:11.752 success 28733, unsuccessful 57, failed 0 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:11.752 rmmod nvme_tcp 00:34:11.752 rmmod nvme_fabrics 00:34:11.752 rmmod nvme_keyring 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 905981 ']' 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 905981 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 905981 ']' 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 905981 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:11.752 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 905981 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 905981' 00:34:12.013 killing process with pid 905981 00:34:12.013 
23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 905981 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 905981 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.013 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:14.559 00:34:14.559 real 0m13.639s 00:34:14.559 user 0m10.910s 00:34:14.559 sys 0m7.112s 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:14.559 ************************************ 00:34:14.559 END TEST nvmf_abort 00:34:14.559 ************************************ 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:14.559 ************************************ 00:34:14.559 START TEST nvmf_ns_hotplug_stress 00:34:14.559 ************************************ 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:34:14.559 * Looking for test storage... 
00:34:14.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.559 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:14.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.560 --rc genhtml_branch_coverage=1 00:34:14.560 --rc genhtml_function_coverage=1 00:34:14.560 --rc genhtml_legend=1 00:34:14.560 --rc geninfo_all_blocks=1 00:34:14.560 --rc geninfo_unexecuted_blocks=1 00:34:14.560 00:34:14.560 ' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:14.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.560 --rc genhtml_branch_coverage=1 00:34:14.560 --rc genhtml_function_coverage=1 00:34:14.560 --rc genhtml_legend=1 00:34:14.560 --rc geninfo_all_blocks=1 00:34:14.560 --rc geninfo_unexecuted_blocks=1 00:34:14.560 00:34:14.560 ' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:14.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.560 --rc genhtml_branch_coverage=1 00:34:14.560 --rc genhtml_function_coverage=1 00:34:14.560 --rc genhtml_legend=1 00:34:14.560 --rc geninfo_all_blocks=1 00:34:14.560 --rc geninfo_unexecuted_blocks=1 00:34:14.560 00:34:14.560 ' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:14.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.560 --rc genhtml_branch_coverage=1 00:34:14.560 --rc genhtml_function_coverage=1 
00:34:14.560 --rc genhtml_legend=1 00:34:14.560 --rc geninfo_all_blocks=1 00:34:14.560 --rc geninfo_unexecuted_blocks=1 00:34:14.560 00:34:14.560 ' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:14.560 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.561 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.561 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.561 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:14.561 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:14.561 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:34:14.561 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.696 23:02:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:22.696 23:02:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:22.696 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:22.696 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:22.696 23:02:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.696 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:22.696 Found net devices under 0000:31:00.0: cvl_0_0 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:22.697 Found net devices under 0000:31:00.1: cvl_0_1 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.697 23:02:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.697 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:34:22.697 00:34:22.697 --- 10.0.0.2 ping statistics --- 00:34:22.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.697 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:22.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:34:22.697 00:34:22.697 --- 10.0.0.1 ping statistics --- 00:34:22.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.697 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=910741 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 910741 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 910741 ']' 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:22.697 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:22.697 [2024-09-30 23:02:49.140700] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:22.697 [2024-09-30 23:02:49.141882] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:34:22.697 [2024-09-30 23:02:49.141948] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.697 [2024-09-30 23:02:49.234099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:22.697 [2024-09-30 23:02:49.329384] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.697 [2024-09-30 23:02:49.329444] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.697 [2024-09-30 23:02:49.329455] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.697 [2024-09-30 23:02:49.329462] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.697 [2024-09-30 23:02:49.329468] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.697 [2024-09-30 23:02:49.329634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.697 [2024-09-30 23:02:49.329794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.697 [2024-09-30 23:02:49.329796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:22.697 [2024-09-30 23:02:49.418827] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:22.697 [2024-09-30 23:02:49.418986] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:22.697 [2024-09-30 23:02:49.419326] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:22.697 [2024-09-30 23:02:49.419533] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
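The network plumbing that nvmf/common.sh performs before each target start, traced twice in this log, condenses to the following (interface names and addresses are the ones this run used):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the comment tags the rule so teardown can drop it with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace

Both pings completing with 0% loss is the gate for NVMF_APP being rewrapped in ip netns exec and the target being started, which is what the records above show happening for the hotplug-stress run.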
00:34:22.958 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:22.958 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:34:22.958 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:22.958 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:22.958 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:23.219 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.219 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:34:23.219 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:23.219 [2024-09-30 23:02:50.162676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.220 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:23.480 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.740 [2024-09-30 23:02:50.523341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.740 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:23.741 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:34:24.001 Malloc0 00:34:24.001 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:24.262 Delay0 00:34:24.262 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:24.262 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:34:24.523 NULL1 00:34:24.523 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
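The setup for the hotplug stress itself, as issued above: a subsystem capped at ten namespaces (-m 10), a slow Delay0 namespace to keep initiator I/O outstanding, and a 1000 MiB null bdev that the test will repeatedly resize:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB, 512-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The records that follow then run spdk_nvme_perf for 30 seconds with -Q 1000 (continue past the I/O errors that hotplug inevitably causes) while a loop removes and re-adds namespace 1 and steps NULL1's size up via bdev_null_resize (1001, 1002, ...), checking kill -0 on the perf pid each pass to confirm the initiator survived the hotplug.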
00:34:24.784 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=911364 00:34:24.784 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:24.784 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:34:24.784 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.045 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:25.045 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:34:25.045 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:34:25.306 true 00:34:25.306 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:25.306 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.566 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:25.827 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:34:25.827 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:34:25.827 true 00:34:26.088 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:26.088 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:26.088 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:26.348 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:34:26.348 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:34:26.609 true 00:34:26.609 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:26.609 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:26.869 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:26.869 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:34:26.869 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:34:27.127 true 00:34:27.127 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:27.127 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.386 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:27.645 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:34:27.645 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:34:27.905 true 00:34:27.905 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:27.905 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.905 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:28.164 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:34:28.164 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:34:28.423 true 00:34:28.423 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:28.423 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:28.423 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:34:28.683 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:34:28.683 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:34:28.943 true 00:34:28.943 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:28.943 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:28.943 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:29.202 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:34:29.202 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:34:29.461 true 00:34:29.461 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:29.461 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:29.461 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:29.721 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:34:29.721 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:34:29.981 true 00:34:29.981 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:29.982 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.241 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:30.241 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:34:30.241 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:34:30.502 true 00:34:30.502 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:30.502 
23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.762 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:30.762 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:34:30.762 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:34:31.022 true 00:34:31.022 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:31.022 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:31.282 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:31.542 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:34:31.542 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:34:31.542 true 00:34:31.542 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:31.542 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:31.802 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:32.062 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:34:32.062 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:34:32.062 true 00:34:32.062 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:32.062 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:32.322 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:32.581 23:02:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:34:32.581 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:34:32.581 true 00:34:32.841 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:32.841 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:32.841 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:33.101 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:34:33.101 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:34:33.362 true 00:34:33.362 23:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:33.362 23:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.362 23:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:33.622 23:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:34:33.622 23:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:34:33.883 true 00:34:33.883 23:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:33.883 23:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:34.143 23:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:34.143 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:34:34.143 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:34:34.403 true 00:34:34.403 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:34.403 23:03:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:34.664 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:34.664 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:34:34.664 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:34:34.924 true 00:34:34.924 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:34.924 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.184 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:35.445 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:34:35.445 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:34:35.445 true 00:34:35.445 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:35.445 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.705 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:35.966 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:34:35.966 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:34:35.966 true 00:34:35.966 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:35.966 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.226 23:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:36.486 23:03:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:34:36.486 23:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:34:36.486 true 00:34:36.746 23:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:36.746 23:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.746 23:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:37.006 23:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:34:37.006 23:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:34:37.267 true 00:34:37.267 23:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:37.267 23:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:37.267 23:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:37.527 23:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:34:37.527 23:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:34:37.787 true 00:34:37.787 23:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:37.787 23:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.047 23:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:38.047 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:34:38.047 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:34:38.307 true 00:34:38.307 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:38.307 23:03:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.567 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:38.567 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:34:38.567 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:34:38.827 true 00:34:38.827 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:38.827 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:39.087 23:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:39.347 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:34:39.347 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:34:39.347 true 00:34:39.347 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:39.347 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:39.607 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:39.867 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:34:39.867 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:34:39.867 true 00:34:39.867 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:39.867 23:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:40.127 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:40.388 23:03:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:34:40.388 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:34:40.388 true 00:34:40.388 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:40.388 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:40.649 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:40.910 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:34:40.910 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:34:40.910 true 00:34:41.171 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:41.171 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:41.171 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:41.432 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:34:41.432 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:34:41.693 true 00:34:41.693 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:41.693 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:41.693 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:41.953 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:34:41.953 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:34:42.212 true 00:34:42.212 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:42.212 23:03:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:42.472 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:42.472 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:34:42.472 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:34:42.732 true 00:34:42.732 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:42.732 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:42.993 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:42.993 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:34:42.993 23:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:34:43.253 true 00:34:43.253 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:43.253 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:43.513 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:43.513 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:34:43.513 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:34:43.773 true 00:34:43.774 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:43.774 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.034 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:44.296 23:03:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:34:44.296 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:34:44.296 true 00:34:44.296 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:44.296 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.556 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:44.816 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:34:44.816 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:34:44.816 true 00:34:44.816 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:44.816 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:45.076 23:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:45.337 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:34:45.337 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:34:45.337 true 00:34:45.337 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:45.337 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:45.597 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:45.858 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:34:45.858 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:34:46.119 true 00:34:46.119 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:46.119 23:03:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:46.119 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:46.380 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:34:46.380 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:34:46.640 true 00:34:46.640 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:46.640 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:46.900 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:46.900 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:34:46.900 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:34:47.161 true 00:34:47.161 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:47.161 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:47.422 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:47.422 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:34:47.422 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:34:47.683 true 00:34:47.683 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:47.683 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:47.943 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:48.203 23:03:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:34:48.203 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:34:48.203 true 00:34:48.203 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:48.203 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:48.463 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:48.724 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:34:48.724 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:34:48.724 true 00:34:48.725 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:48.725 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:48.986 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:49.246 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:34:49.246 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:34:49.246 true 00:34:49.246 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:49.246 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:49.507 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:49.768 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:34:49.768 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:34:49.768 true 00:34:50.028 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:50.028 23:03:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:50.028 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:50.288 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:34:50.288 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:34:50.549 true 00:34:50.549 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:50.549 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:50.549 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:50.810 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:34:50.810 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:34:51.070 true 00:34:51.070 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:51.070 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:51.330 23:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:51.330 23:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:34:51.330 23:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:34:51.590 true 00:34:51.590 23:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:51.590 23:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:51.850 23:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:51.850 23:03:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:34:51.850 23:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:34:52.110 true 00:34:52.110 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:52.110 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:52.369 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:52.630 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:34:52.630 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:34:52.630 true 00:34:52.630 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:52.630 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:52.891 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:53.151 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:34:53.151 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:34:53.151 true 00:34:53.151 23:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:53.151 23:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:53.412 23:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:53.672 23:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:34:53.672 23:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:34:53.672 true 00:34:53.933 23:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364 00:34:53.933 23:03:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:53.933 23:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:54.193 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:34:54.193 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:34:54.454 true
00:34:54.454 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364
00:34:54.454 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:54.454 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:54.715 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:34:54.715 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:34:54.974 true
00:34:54.974 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364
00:34:54.974 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:54.974 Initializing NVMe Controllers
00:34:54.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:54.974 Controller IO queue size 128, less than required.
00:34:54.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:54.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:34:54.974 Initialization complete. Launching workers.
00:34:54.974 ========================================================
00:34:54.974                                                                          Latency(us)
00:34:54.974 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:34:54.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30460.44      14.87    4202.14    1093.53   11522.72
00:34:54.974 ========================================================
00:34:54.974 Total                                                                  :   30460.44      14.87    4202.14    1093.53   11522.72
00:34:55.234 23:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:55.234 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:34:55.234 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:34:55.494 true
00:34:55.494 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 911364
00:34:55.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (911364) - No such process
00:34:55.494 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 911364
00:34:55.494 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:55.755 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:34:55.755 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:34:55.755 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:34:55.755 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:34:55.755 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:55.755 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:34:56.015 null0
00:34:56.015 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:34:56.015 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:56.015 23:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:34:56.276 null1
00:34:56.276 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:34:56.276 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:56.276 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:34:56.276 null2 00:34:56.276 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:56.276 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:56.276 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:34:56.537 null3 00:34:56.537 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:56.537 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:56.537 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:34:56.797 null4 00:34:56.797 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:56.797 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:56.797 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:34:56.797 null5 00:34:56.797 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:56.797 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:56.797 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:34:57.057 null6 00:34:57.057 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:57.057 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:57.057 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:34:57.319 null7 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
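The long run of @44-@50 xtrace entries that ends at the perf table above is a single hotplug loop: as long as the spdk_nvme_perf client (PID 911364) stays alive, the script detaches namespace 1 from cnode1, re-attaches the Delay0 bdev, and grows the NULL1 bdev by one unit per pass. A minimal bash reconstruction of what the trace shows (a sketch, not the script's verbatim text: $rpc_py standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py is shorthand, and the null_size starting value is inferred from the first traced value of 1001):

    # Reconstructed from the @40-@53 trace above; details are assumptions.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                       # @42: 911364 in this run
    null_size=1000                    # assumed; first traced value is 1001
    while kill -0 "$PERF_PID"; do     # @44: fails with "No such process" once perf exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46
        null_size=$((null_size + 1))                                      # @49
        $rpc_py bdev_null_resize NULL1 "$null_size"                       # @50: prints "true"
    done
    wait "$PERF_PID"                  # @53: traced as "wait 911364"

null_size climbs from 1001 to 1055 before kill -0 reports "No such process", so roughly 55 hotplug cycles fit inside the 30-second perf run. The perf table is also self-consistent: 30460.44 IOPS of 512-byte reads is 30460.44 * 512 / 2^20, which is approximately 14.87 MiB/s, matching the MiB/s column.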
00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
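With the perf phase torn down (@53-@55), the second phase begins: eight 100 MB null bdevs with 4096-byte blocks are created (@58-@60, traced just above), and then one add_remove worker per bdev is launched in the background, its PID collected into the pids array (@62-@64, the entries interleaving here). A sketch of that setup as the trace suggests it (the backgrounding with & is inferred from the $! captured at @64; the exact script text may differ):

    nthreads=8                                      # @58
    pids=()                                         # @58
    for ((i = 0; i < nthreads; i++)); do            # @59
        $rpc_py bdev_null_create "null$i" 100 4096  # @60: echoes the bdev name
    done
    for ((i = 0; i < nthreads; i++)); do            # @62
        add_remove $((i + 1)) "null$i" &            # @63: nsid 1..8 against null0..null7
        pids+=($!)                                  # @64
    done

Because the workers run asynchronously, their @14-@17 entries interleave with the parent shell's @62-@64 launch entries in the trace that follows.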
00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 918153 918154 918156 918158 918160 918162 918164 918166 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:57.319 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:57.320 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:57.580 23:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.580 23:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.580 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.581 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:57.841 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:57.841 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:57.842 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:58.103 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.103 23:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.104 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:58.104 23:03:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:58.104 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:58.104 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:58.104 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:58.104 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:58.104 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:58.365 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:58.366 23:03:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.366 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.625 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:58.884 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.884 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.884 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:58.884 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.884 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:58.885 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:59.144 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:59.144 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.144 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.144 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:59.144 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.144 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.144 23:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.144 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.145 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:59.145 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.145 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.145 23:03:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:59.145 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.145 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.145 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:59.145 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:59.145 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.406 23:03:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:59.406 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.407 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.407 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:59.407 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.407 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.407 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:59.667 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.668 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.928 23:03:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:59.928 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:00.188 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.188 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.188 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:00.188 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:00.188 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:00.188 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:00.188 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:00.447 23:03:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.447 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.448 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:00.448 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:00.448 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.448 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.448 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:00.448 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.707 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.967 23:03:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:00.967 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.227 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.227 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.227 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:35:01.227 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:35:01.227 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:01.227 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.227 rmmod nvme_tcp 00:35:01.227 rmmod nvme_fabrics 00:35:01.227 rmmod nvme_keyring 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 910741 ']' 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 910741 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 910741 ']' 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 910741 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
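[editor's note] The namespace churn traced above is the heart of ns_hotplug_stress.sh: the xtrace tags point at a three-line loop (script lines 16-18) that repeatedly attaches and detaches namespaces 1-8 on nqn.2016-06.io.spdk:cnode1. A minimal sketch of such a loop, reconstructed from the tags alone -- the $RANDOM-based selection, the coin flips, and the '|| true' error tolerance are assumptions (the trace shows iterations that issue only an add, only a remove, or neither, and removes of already-absent namespaces that do not stop the run), so the real script may differ:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
    for (( i = 0; i < 10; ++i )); do                # matches the (( ++i )) / (( i < 10 )) tags at line 16
            n=$(( RANDOM % 8 + 1 ))                 # nsid 1..8 maps onto bdevs null0..null7 in the trace
            if (( RANDOM % 2 )); then               # hypothetical coin flip; the trace skips some adds
                    "$rpc_py" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$(( n - 1 ))" || true
            fi
            if (( RANDOM % 2 )); then               # likewise for removes, with an independent random nsid
                    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$(( RANDOM % 8 + 1 ))" || true
            fi
    done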
00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 910741 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 910741' 00:35:01.227 killing process with pid 910741 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 910741 00:35:01.227 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 910741 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.487 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.392 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.392 00:35:03.392 real 0m49.247s 00:35:03.392 user 3m2.137s 00:35:03.392 sys 0m23.087s 00:35:03.392 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:03.392 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:03.392 ************************************ 00:35:03.392 END TEST nvmf_ns_hotplug_stress 00:35:03.392 ************************************ 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:03.653 ************************************ 00:35:03.653 START TEST nvmf_delete_subsystem 00:35:03.653 ************************************ 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:03.653 * Looking for test storage... 00:35:03.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:03.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.653 --rc genhtml_branch_coverage=1 00:35:03.653 --rc genhtml_function_coverage=1 00:35:03.653 --rc genhtml_legend=1 00:35:03.653 --rc geninfo_all_blocks=1 00:35:03.653 --rc geninfo_unexecuted_blocks=1 00:35:03.653 00:35:03.653 ' 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:03.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.653 --rc genhtml_branch_coverage=1 00:35:03.653 --rc genhtml_function_coverage=1 00:35:03.653 --rc genhtml_legend=1 00:35:03.653 --rc geninfo_all_blocks=1 00:35:03.653 --rc geninfo_unexecuted_blocks=1 00:35:03.653 00:35:03.653 ' 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:03.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.653 --rc genhtml_branch_coverage=1 00:35:03.653 --rc genhtml_function_coverage=1 00:35:03.653 --rc genhtml_legend=1 00:35:03.653 --rc geninfo_all_blocks=1 00:35:03.653 --rc geninfo_unexecuted_blocks=1 00:35:03.653 00:35:03.653 ' 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:03.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.653 --rc genhtml_branch_coverage=1 00:35:03.653 --rc genhtml_function_coverage=1 00:35:03.653 --rc 
genhtml_legend=1 00:35:03.653 --rc geninfo_all_blocks=1 00:35:03.653 --rc geninfo_unexecuted_blocks=1 00:35:03.653 00:35:03.653 ' 00:35:03.653 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.915 23:03:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:35:03.915 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:12.071 23:03:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:12.071 23:03:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:12.071 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:12.071 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:35:12.071 Found net devices under 0000:31:00.0: cvl_0_0 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:12.071 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:12.072 Found net devices under 0000:31:00.1: cvl_0_1 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:12.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:12.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:35:12.072 00:35:12.072 --- 10.0.0.2 ping statistics --- 00:35:12.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.072 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:12.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:12.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:35:12.072 00:35:12.072 --- 10.0.0.1 ping statistics --- 00:35:12.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.072 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=923376 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 923376 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 923376 ']' 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
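[editor's note] waitforlisten (entered at nvmf/common.sh@506 above) blocks until the nvmf_tgt just exec'd inside the cvl_0_0_ns_spdk namespace answers on its RPC socket. A sketch of the polling pattern, reconstructed from the defaults visible in the trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100); the real helper in autotest_common.sh handles more cases (TCP RPC addresses, crash detection), and the repo-relative rpc.py path is shorthand:

    waitforlisten() {
            local pid=$1
            local rpc_addr=${2:-/var/tmp/spdk.sock}
            local max_retries=100
            echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
            while (( max_retries-- )); do
                    kill -0 "$pid" 2> /dev/null || return 1   # give up if the target died during startup
                    # any cheap RPC works as a liveness probe; rpc_get_methods is always registered
                    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
                    sleep 0.1
            done
            return 1
    }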
00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:35:12.072 [2024-09-30 23:03:38.449137] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:12.072 [2024-09-30 23:03:38.450273] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:35:12.072 [2024-09-30 23:03:38.450319] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:12.072 [2024-09-30 23:03:38.519743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:35:12.072 [2024-09-30 23:03:38.605805] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:12.072 [2024-09-30 23:03:38.605865] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:12.072 [2024-09-30 23:03:38.605871] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:12.072 [2024-09-30 23:03:38.605877] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:12.072 [2024-09-30 23:03:38.605882] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:12.072 [2024-09-30 23:03:38.605963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:35:12.072 [2024-09-30 23:03:38.605975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:35:12.072 [2024-09-30 23:03:38.677845] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:12.072 [2024-09-30 23:03:38.678143] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:35:12.072 [2024-09-30 23:03:38.678542] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
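[editor's note] The app_setup_trace notices above are actionable: because nvmf_tgt was started with -e 0xFFFF, every tracepoint group is enabled and the trace history lives in shared memory under instance ID 0. To inspect it while the target runs, or from the copied file afterwards, something like the following should work -- the build/bin location is inferred from the nvmf_tgt path earlier in the log, and the -f file-input flag is an assumption about the spdk_trace tool:

    # live snapshot of the ring buffers, exactly as the notice suggests
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

    # or preserve the shared-memory file for offline decoding once the target exits
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0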
00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.072 [2024-09-30 23:03:38.766957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.072 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.072 [2024-09-30 23:03:38.803462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.073 NULL1 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.073 23:03:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.073 Delay0 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=923396 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:35:12.073 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:12.073 [2024-09-30 23:03:38.913881] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
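[editor's note] This is the setup the test exists for: NULL1 is wrapped in a delay bdev that adds one second (-r/-t/-w/-n are in microseconds, so 1000000 each) to every read, write, and unmap; spdk_nvme_perf then queues I/O against it at depth 128; and two seconds later the script deletes the whole subsystem while those commands are still parked in the delay layer. Lifted straight from the trace, with the full Jenkins paths shortened to repo-relative ones, the sequence amounts to:

    rpc_py=scripts/rpc.py                     # run from the SPDK checkout; full paths shortened from the trace
    "$rpc_py" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                               # the trace records this as perf_pid=923396
    sleep 2                                   # let perf fill its queues against the slow namespace
    "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait "$perf_pid"                          # assumption: the script later reaps perf, which must survive the aborts

The flood of 'Read/Write completed with error (sct=0, sc=8)' lines that follows is the expected outcome, not a failure: status code type 0 with status code 0x08 is the generic 'Command Aborted due to SQ Deletion', which is what in-flight commands report when the subsystem and its queues vanish underneath them, while 'starting I/O failed: -6' (likely -ENXIO) marks submissions that raced the teardown.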
00:35:13.987 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:13.987 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:13.987 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:35:14.248 [several hundred near-identical per-I/O lines elided: 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)', interleaved with 'starting I/O failed: -6', as the deleted subsystem aborts the queued commands; the qpair state-transition errors below were reported in between]
00:35:14.248 [2024-09-30 23:03:41.039772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227b390 is same with the state(6) to be set
00:35:14.249 [2024-09-30 23:03:41.043577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0ccc00d640 is same with the state(6) to be set
00:35:15.191 [2024-09-30 23:03:42.016338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227c6b0 is same with the state(6) to be set
00:35:15.191 [2024-09-30 23:03:42.043288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227b1b0 is same with the state(6) to be set
00:35:15.191 [2024-09-30 23:03:42.043820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227b6c0 is same with the state(6) to be set
00:35:15.191 [2024-09-30 23:03:42.044988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0ccc00d310 is same with the state(6) to be set
00:35:15.191 [2024-09-30 23:03:42.045263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0ccc000c00 is same with the state(6) to be set
00:35:15.191 Initializing NVMe Controllers
00:35:15.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:15.191 Controller IO queue size 128, less than required.
00:35:15.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:15.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:35:15.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:35:15.191 Initialization complete. Launching workers.
00:35:15.191 ========================================================
00:35:15.191 Latency(us)
00:35:15.191 Device Information : IOPS MiB/s Average min max
00:35:15.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.64 0.08 888606.98 394.12 1007344.42
00:35:15.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.73 0.08 933971.35 302.99 1010995.41
00:35:15.191 ========================================================
00:35:15.191 Total : 326.37 0.16 909975.26 302.99 1010995.41
00:35:15.191
00:35:15.191 [2024-09-30 23:03:42.045676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227c6b0 (9): Bad file descriptor
00:35:15.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:35:15.191 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.191 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:35:15.191 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 923396
00:35:15.191 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 923396
00:35:15.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (923396) - No such process
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 923396
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 923396
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 923396
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:15.762 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:15.763 [2024-09-30 23:03:42.579298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=924073 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 924073 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:15.763 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:15.763 [2024-09-30 23:03:42.665545] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
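The second pass recreates the subsystem (-m 10 caps it at 10 namespaces), re-adds the listener and the Delay0 namespace, and starts a shorter 3-second perf run; delete_subsystem.sh then polls the perf PID with kill -0 every half second. Distilled into a hedged sketch (same assumptions as the sketch above; $perf_pid and the timeout message are placeholders):

# Recreate-and-rerun flow as traced above, reduced to its essentials.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do            # kill -0 only probes that the PID exists
    (( delay++ > 20 )) && { echo 'perf still alive after ~10s'; exit 1; }
    sleep 0.5
done

The 'kill: (924073) - No such process' line further down is this loop terminating normally: once the 3-second run completes, the PID vanishes and the kill -0 probe fails.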
00:35:16.332 23:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:16.332 23:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 924073 00:35:16.332 23:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:16.592 23:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:16.592 23:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 924073 00:35:16.592 23:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:17.163 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:17.163 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 924073 00:35:17.163 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:17.733 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:17.733 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 924073 00:35:17.733 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:18.302 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:18.302 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 924073 00:35:18.302 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:18.872 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:18.872 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 924073 00:35:18.872 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:18.872 Initializing NVMe Controllers 00:35:18.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:18.872 Controller IO queue size 128, less than required. 00:35:18.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:18.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:18.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:18.872 Initialization complete. Launching workers. 
00:35:18.872 ========================================================
00:35:18.872 Latency(us)
00:35:18.872 Device Information : IOPS MiB/s Average min max
00:35:18.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001994.77 1000160.22 1006263.79
00:35:18.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003854.94 1000252.55 1041877.58
00:35:18.872 ========================================================
00:35:18.872 Total : 256.00 0.12 1002924.86 1000160.22 1041877.58
00:35:18.872
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 924073
00:35:19.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (924073) - No such process
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 924073
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:19.132 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 923376 ']'
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 923376
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 923376 ']'
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 923376
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 923376
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 923376'
00:35:19.392 killing process with pid 923376
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 923376
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 923376
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:19.392 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:21.939
00:35:21.939 real 0m18.017s
00:35:21.939 user 0m26.384s
00:35:21.939 sys 0m7.719s
00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:35:21.939 ************************************
00:35:21.939 END TEST nvmf_delete_subsystem
00:35:21.939 ************************************
00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:21.939 ************************************ 00:35:21.939 START TEST nvmf_host_management 00:35:21.939 ************************************ 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:21.939 * Looking for test storage... 00:35:21.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:21.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.939 --rc genhtml_branch_coverage=1 00:35:21.939 --rc genhtml_function_coverage=1 00:35:21.939 --rc genhtml_legend=1 00:35:21.939 --rc geninfo_all_blocks=1 00:35:21.939 --rc geninfo_unexecuted_blocks=1 00:35:21.939 00:35:21.939 ' 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:21.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.939 --rc genhtml_branch_coverage=1 00:35:21.939 --rc genhtml_function_coverage=1 00:35:21.939 --rc genhtml_legend=1 00:35:21.939 --rc geninfo_all_blocks=1 00:35:21.939 --rc geninfo_unexecuted_blocks=1 00:35:21.939 00:35:21.939 ' 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:21.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.939 --rc genhtml_branch_coverage=1 00:35:21.939 --rc genhtml_function_coverage=1 00:35:21.939 --rc genhtml_legend=1 00:35:21.939 --rc geninfo_all_blocks=1 00:35:21.939 --rc geninfo_unexecuted_blocks=1 00:35:21.939 00:35:21.939 ' 00:35:21.939 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:21.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.939 --rc genhtml_branch_coverage=1 00:35:21.939 --rc genhtml_function_coverage=1 00:35:21.939 --rc genhtml_legend=1 
00:35:21.939 --rc geninfo_all_blocks=1 00:35:21.939 --rc geninfo_unexecuted_blocks=1 00:35:21.939 00:35:21.939 ' 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.940 23:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.940 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:30.080 23:03:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:30.080 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- 
# pci_devs=("${e810[@]}") 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:30.081 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:30.081 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:30.081 
23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:30.081 Found net devices under 0000:31:00.0: cvl_0_0 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:30.081 Found net devices under 0000:31:00.1: cvl_0_1 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:30.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:30.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:35:30.081 00:35:30.081 --- 10.0.0.2 ping statistics --- 00:35:30.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:30.081 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:30.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:30.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:35:30.081 00:35:30.081 --- 10.0.0.1 ping statistics --- 00:35:30.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:30.081 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=929130 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 929130 00:35:30.081 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 929130 ']' 00:35:30.082 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:35:30.082 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.082 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:30.082 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:30.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.082 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:30.082 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.082 [2024-09-30 23:03:56.581253] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:30.082 [2024-09-30 23:03:56.582420] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:35:30.082 [2024-09-30 23:03:56.582473] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.082 [2024-09-30 23:03:56.670982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:30.082 [2024-09-30 23:03:56.766173] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.082 [2024-09-30 23:03:56.766231] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.082 [2024-09-30 23:03:56.766240] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:30.082 [2024-09-30 23:03:56.766248] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:30.082 [2024-09-30 23:03:56.766254] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:30.082 [2024-09-30 23:03:56.766420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:30.082 [2024-09-30 23:03:56.766580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:30.082 [2024-09-30 23:03:56.766744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.082 [2024-09-30 23:03:56.766744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:35:30.082 [2024-09-30 23:03:56.853092] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:30.082 [2024-09-30 23:03:56.854011] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:30.082 [2024-09-30 23:03:56.854304] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:30.082 [2024-09-30 23:03:56.854827] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:30.082 [2024-09-30 23:03:56.854872] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
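The nvmf_tcp_init sequence traced above reduces to a short piece of iproute2 plumbing: flush both e810 ports, move one (cvl_0_0) into a private network namespace as the target side, keep the other (cvl_0_1) in the root namespace as the initiator side, open TCP port 4420, and confirm reachability with one ping in each direction (on this NET_TYPE=phy rig the two ports are evidently cabled to each other, which is why the pings cross the wire rather than loopback). Below is a minimal standalone sketch of the same topology, with interface names and addresses taken from the log; the authoritative logic lives in nvmf/common.sh.

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"          # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1        # namespaced target -> root ns

Because cvl_0_0 is namespaced, nvmf_tgt is launched under the NVMF_TARGET_NS_CMD prefix (ip netns exec cvl_0_0_ns_spdk, visible in the waitforlisten trace above), so the 10.0.0.2:4420 listener is isolated from the host network stack while the UNIX-domain RPC socket /var/tmp/spdk.sock stays reachable from the root namespace.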
00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.654 [2024-09-30 23:03:57.431602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.654 Malloc0 00:35:30.654 [2024-09-30 23:03:57.519842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=929281 00:35:30.654 23:03:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 929281 /var/tmp/bdevperf.sock 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 929281 ']' 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:30.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:35:30.654 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:30.655 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:35:30.655 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:35:30.655 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:30.655 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:30.655 { 00:35:30.655 "params": { 00:35:30.655 "name": "Nvme$subsystem", 00:35:30.655 "trtype": "$TEST_TRANSPORT", 00:35:30.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.655 "adrfam": "ipv4", 00:35:30.655 "trsvcid": "$NVMF_PORT", 00:35:30.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.655 "hdgst": ${hdgst:-false}, 00:35:30.655 "ddgst": ${ddgst:-false} 00:35:30.655 }, 00:35:30.655 "method": "bdev_nvme_attach_controller" 00:35:30.655 } 00:35:30.655 EOF 00:35:30.655 )") 00:35:30.655 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:35:30.655 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
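The config+=("$(cat <<-EOF ... EOF)") trace above is gen_nvmf_target_json at work: for each requested subsystem number it expands one bdev_nvme_attach_controller fragment from the environment (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT), joins the fragments with IFS=',' and pretty-prints the assembled document through jq (the @580 to @582 steps), which bdevperf then reads on --json /dev/fd/63; the rendered fragment for subsystem 0 follows below. A reduced single-subsystem sketch of that expansion, with the values substituted in this run and the ${hdgst:-false}/${ddgst:-false} defaults written out literally:

TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
jq . <<<"$config"   # validate/pretty-print before handing the config to bdevperf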
00:35:30.655 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:35:30.655 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:30.655 "params": { 00:35:30.655 "name": "Nvme0", 00:35:30.655 "trtype": "tcp", 00:35:30.655 "traddr": "10.0.0.2", 00:35:30.655 "adrfam": "ipv4", 00:35:30.655 "trsvcid": "4420", 00:35:30.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.655 "hdgst": false, 00:35:30.655 "ddgst": false 00:35:30.655 }, 00:35:30.655 "method": "bdev_nvme_attach_controller" 00:35:30.655 }' 00:35:30.655 [2024-09-30 23:03:57.642624] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:35:30.655 [2024-09-30 23:03:57.642694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929281 ] 00:35:30.983 [2024-09-30 23:03:57.727603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.983 [2024-09-30 23:03:57.824558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.302 Running I/O for 10 seconds... 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=779 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 779 -ge 100 ']' 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.644 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:31.644 [2024-09-30 23:03:58.533163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:31.644 [2024-09-30 23:03:58.533227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.533240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:31.644 [2024-09-30 23:03:58.533248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.533258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:31.644 [2024-09-30 23:03:58.533265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.533274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:31.644 [2024-09-30 23:03:58.533281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.533289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3f2a0 is same with the state(6) to be set 00:35:31.644 [2024-09-30 23:03:58.535271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-09-30 23:03:58.535806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-09-30 23:03:58.535820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.535831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.535848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.535859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.535874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.535884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.535907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.535920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.535935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.535946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.535964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.535976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.535992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.645 [2024-09-30 23:03:58.536049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:31.645 [2024-09-30 23:03:58.536276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.645 [2024-09-30 23:03:58.536426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:31.645 [2024-09-30 23:03:58.536544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 
23:03:58.536599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 23:03:58.536807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-09-30 
23:03:58.536828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-09-30 23:03:58.536836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.646 [2024-09-30 23:03:58.536845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.646 [2024-09-30 23:03:58.536852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.646 [2024-09-30 23:03:58.536862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.646 [2024-09-30 23:03:58.536869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.646 [2024-09-30 23:03:58.536881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.646 [2024-09-30 23:03:58.536891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.646 [2024-09-30 23:03:58.536920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.646 [2024-09-30 23:03:58.536928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.646 [2024-09-30 23:03:58.536937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57f60 is same with the state(6) to be set 00:35:31.646 [2024-09-30 23:03:58.537016] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc57f60 was disconnected and freed. reset controller. 
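The wall of "ABORTED - SQ DELETION" completions above is the intended outcome of host_management.sh step @84, not a transport failure: once the I/O counter clears the threshold (read_io_count=779 against the 100 minimum), the test revokes the initiator's host NQN from the subsystem while bdevperf still has up to 64 commands in flight (-q 64). The target tears down the connection and deletes its submission queues, every outstanding read and write completes with ABORTED - SQ DELETION, and the initiator's bdev_nvme layer frees qpair 0xc57f60 and schedules a controller reset. Step @85 then re-adds the host so that reset can reconnect (the "Resetting controller successful" notice further down). The same pair of operations can be driven by hand, sketched here with SPDK's stock rpc.py against the target's /var/tmp/spdk.sock rather than the test's rpc_cmd wrapper:

scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# in-flight I/O now fails back to the initiator with ABORTED - SQ DELETION
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# the initiator's automatic controller reset can now reconnect and resume I/O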
00:35:31.646 [2024-09-30 23:03:58.538299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:31.646 task offset: 114560 on job bdev=Nvme0n1 fails 00:35:31.646 00:35:31.646 Latency(us) 00:35:31.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.646 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:31.646 Job: Nvme0n1 ended in about 0.51 seconds with error 00:35:31.646 Verification LBA range: start 0x0 length 0x400 00:35:31.646 Nvme0n1 : 0.51 1660.25 103.77 126.50 0.00 34815.28 3549.87 34515.63 00:35:31.646 =================================================================================================================== 00:35:31.646 Total : 1660.25 103.77 126.50 0.00 34815.28 3549.87 34515.63 00:35:31.646 [2024-09-30 23:03:58.540486] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:31.646 [2024-09-30 23:03:58.540523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3f2a0 (9): Bad file descriptor 00:35:31.646 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.646 23:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:35:31.907 [2024-09-30 23:03:58.675146] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 929281 00:35:32.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (929281) - No such process 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:32.861 { 00:35:32.861 "params": { 00:35:32.861 "name": "Nvme$subsystem", 00:35:32.861 "trtype": "$TEST_TRANSPORT", 00:35:32.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.861 "adrfam": "ipv4", 00:35:32.861 "trsvcid": "$NVMF_PORT", 00:35:32.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.861 "hdgst": ${hdgst:-false}, 00:35:32.861 "ddgst": ${ddgst:-false} 00:35:32.861 }, 00:35:32.861 "method": "bdev_nvme_attach_controller" 
00:35:32.861 } 00:35:32.861 EOF 00:35:32.861 )") 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:35:32.861 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:32.861 "params": { 00:35:32.861 "name": "Nvme0", 00:35:32.861 "trtype": "tcp", 00:35:32.861 "traddr": "10.0.0.2", 00:35:32.861 "adrfam": "ipv4", 00:35:32.861 "trsvcid": "4420", 00:35:32.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.861 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:32.861 "hdgst": false, 00:35:32.861 "ddgst": false 00:35:32.861 }, 00:35:32.861 "method": "bdev_nvme_attach_controller" 00:35:32.861 }' 00:35:32.861 [2024-09-30 23:03:59.609395] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:35:32.861 [2024-09-30 23:03:59.609477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929711 ] 00:35:32.861 [2024-09-30 23:03:59.692197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.861 [2024-09-30 23:03:59.788921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.122 Running I/O for 1 seconds... 00:35:34.063 1820.00 IOPS, 113.75 MiB/s 00:35:34.063 Latency(us) 00:35:34.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.063 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:34.063 Verification LBA range: start 0x0 length 0x400 00:35:34.063 Nvme0n1 : 1.02 1857.68 116.11 0.00 0.00 33687.25 4833.28 38884.69 00:35:34.063 =================================================================================================================== 00:35:34.063 Total : 1857.68 116.11 0.00 0.00 33687.25 4833.28 38884.69 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:35:34.324 23:04:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:34.324 rmmod nvme_tcp 00:35:34.324 rmmod nvme_fabrics 00:35:34.324 rmmod nvme_keyring 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 929130 ']' 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 929130 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 929130 ']' 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 929130 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929130 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929130' 00:35:34.324 killing process with pid 929130 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 929130 00:35:34.324 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 929130 00:35:34.585 [2024-09-30 23:04:01.405128] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.585 23:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.499 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:36.499 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:35:36.499 00:35:36.499 real 0m14.946s 00:35:36.499 user 0m19.820s 00:35:36.499 sys 0m7.690s 00:35:36.499 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:36.499 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:36.499 ************************************ 00:35:36.499 END TEST nvmf_host_management 00:35:36.499 ************************************ 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:36.760 ************************************ 00:35:36.760 START TEST nvmf_lvol 00:35:36.760 ************************************ 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:36.760 * Looking for test storage... 
00:35:36.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:36.760 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.022 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.022 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.022 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.022 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.022 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.022 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.022 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:37.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.023 --rc genhtml_branch_coverage=1 00:35:37.023 --rc genhtml_function_coverage=1 00:35:37.023 --rc genhtml_legend=1 00:35:37.023 --rc geninfo_all_blocks=1 00:35:37.023 --rc geninfo_unexecuted_blocks=1 00:35:37.023 00:35:37.023 ' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:37.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.023 --rc genhtml_branch_coverage=1 00:35:37.023 --rc genhtml_function_coverage=1 00:35:37.023 --rc genhtml_legend=1 00:35:37.023 --rc geninfo_all_blocks=1 00:35:37.023 --rc geninfo_unexecuted_blocks=1 00:35:37.023 00:35:37.023 ' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:37.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.023 --rc genhtml_branch_coverage=1 00:35:37.023 --rc genhtml_function_coverage=1 00:35:37.023 --rc genhtml_legend=1 00:35:37.023 --rc geninfo_all_blocks=1 00:35:37.023 --rc geninfo_unexecuted_blocks=1 00:35:37.023 00:35:37.023 ' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:37.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.023 --rc genhtml_branch_coverage=1 00:35:37.023 --rc genhtml_function_coverage=1 00:35:37.023 --rc genhtml_legend=1 00:35:37.023 --rc geninfo_all_blocks=1 00:35:37.023 --rc geninfo_unexecuted_blocks=1 00:35:37.023 00:35:37.023 ' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.023 23:04:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:37.023 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:37.024 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.024 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:37.024 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.024 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:37.024 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:37.024 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:35:37.024 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:45.165 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:45.165 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.165 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:45.166 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:45.166 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:45.166 23:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:45.166 Found net devices under 0000:31:00.0: cvl_0_0 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:45.166 Found net devices under 0000:31:00.1: cvl_0_1 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.166 23:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:35:45.166 00:35:45.166 --- 10.0.0.2 ping statistics --- 00:35:45.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.166 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:45.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:35:45.166 00:35:45.166 --- 10.0.0.1 ping statistics --- 00:35:45.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.166 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.166 23:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=934272 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 934272 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 934272 ']' 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:45.166 23:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:45.166 [2024-09-30 23:04:11.405483] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:45.166 [2024-09-30 23:04:11.406465] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:35:45.166 [2024-09-30 23:04:11.406502] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.166 [2024-09-30 23:04:11.490522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:45.166 [2024-09-30 23:04:11.557419] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.166 [2024-09-30 23:04:11.557457] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.166 [2024-09-30 23:04:11.557465] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.166 [2024-09-30 23:04:11.557471] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.166 [2024-09-30 23:04:11.557477] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.166 [2024-09-30 23:04:11.557613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.166 [2024-09-30 23:04:11.557762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.166 [2024-09-30 23:04:11.557763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.166 [2024-09-30 23:04:11.620786] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:45.166 [2024-09-30 23:04:11.620808] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
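(Annotation: everything from nvmfappstart through the reactor and interrupt-mode notices above amounts to the launch sequence sketched below. This is a minimal bash reduction, assuming the repo path and namespace name shown in this trace; the polling loop stands in for waitforlisten, and probing rpc_get_methods is one conventional way to test the RPC socket, not necessarily the exact call the harness makes.)

# launch nvmf_tgt inside the test namespace: 3 cores (-m 0x7), all trace
# groups enabled (-e 0xFFFF), interrupt mode -- the flags traced above
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!

# block until the app answers on its default socket /var/tmp/spdk.sock,
# bailing out early if the target process died during startup
until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1
    sleep 0.5
done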
00:35:45.166 [2024-09-30 23:04:11.621576] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:45.166 [2024-09-30 23:04:11.621681] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:45.427 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:45.427 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:35:45.427 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:45.427 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.427 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:45.427 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.427 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:45.427 [2024-09-30 23:04:12.394635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.427 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:45.687 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:35:45.687 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:45.947 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:35:45.947 23:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:35:46.207 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:35:46.467 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c3d89db0-18f5-418b-959f-b0589a024d10 00:35:46.467 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3d89db0-18f5-418b-959f-b0589a024d10 lvol 20 00:35:46.467 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=40074892-2708-4fbe-9b3a-7b8db73aba85 00:35:46.467 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:46.726 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40074892-2708-4fbe-9b3a-7b8db73aba85 00:35:46.726 23:04:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:46.986 [2024-09-30 23:04:13.862433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.986 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:47.245 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=934834 00:35:47.245 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:35:47.245 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:35:48.185 23:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 40074892-2708-4fbe-9b3a-7b8db73aba85 MY_SNAPSHOT 00:35:48.446 23:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2f3d2512-0838-4dc0-9a39-075d2fd7b03e 00:35:48.446 23:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 40074892-2708-4fbe-9b3a-7b8db73aba85 30 00:35:48.707 23:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2f3d2512-0838-4dc0-9a39-075d2fd7b03e MY_CLONE 00:35:48.967 23:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fea1eb67-4708-40d3-b841-f01e77567091 00:35:48.967 23:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fea1eb67-4708-40d3-b841-f01e77567091 00:35:49.227 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 934834 00:35:57.363 Initializing NVMe Controllers 00:35:57.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:57.363 Controller IO queue size 128, less than required. 00:35:57.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:57.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:57.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:57.363 Initialization complete. Launching workers. 
00:35:57.363 ======================================================== 00:35:57.363 Latency(us) 00:35:57.363 Device Information : IOPS MiB/s Average min max 00:35:57.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14965.30 58.46 8555.94 1941.97 81052.19 00:35:57.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15598.10 60.93 8205.51 490.02 83917.99 00:35:57.363 ======================================================== 00:35:57.363 Total : 30563.40 119.39 8377.10 490.02 83917.99 00:35:57.363 00:35:57.363 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:57.624 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40074892-2708-4fbe-9b3a-7b8db73aba85 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3d89db0-18f5-418b-959f-b0589a024d10 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:57.885 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:57.885 rmmod nvme_tcp 00:35:57.885 rmmod nvme_fabrics 00:35:58.145 rmmod nvme_keyring 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 934272 ']' 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 934272 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 934272 ']' 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 934272 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:58.145 23:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 934272 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 934272' 00:35:58.145 killing process with pid 934272 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 934272 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 934272 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:58.145 23:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.687 00:36:00.687 real 0m23.631s 00:36:00.687 user 0m55.311s 00:36:00.687 sys 0m10.616s 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:00.687 ************************************ 00:36:00.687 END TEST nvmf_lvol 00:36:00.687 ************************************ 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:00.687 ************************************ 00:36:00.687 START TEST nvmf_lvs_grow 00:36:00.687 ************************************ 
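(Annotation: the START/END banners and the real/user/sys timing triple seen above around each suite come from the run_test helper. The following is a rough bash reduction inferred from this output alone; the real implementation in autotest_common.sh also manages xtrace state and records pass/fail, which is omitted here.)

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # produces the real/user/sys lines seen in this log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

# e.g. the invocation traced above:
# run_test nvmf_lvs_grow \
#     /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh \
#     --transport=tcp --interrupt-mode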
00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:00.687 * Looking for test storage... 00:36:00.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:00.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.687 --rc genhtml_branch_coverage=1 00:36:00.687 --rc genhtml_function_coverage=1 00:36:00.687 --rc genhtml_legend=1 00:36:00.687 --rc geninfo_all_blocks=1 00:36:00.687 --rc geninfo_unexecuted_blocks=1 00:36:00.687 00:36:00.687 ' 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:00.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.687 --rc genhtml_branch_coverage=1 00:36:00.687 --rc genhtml_function_coverage=1 00:36:00.687 --rc genhtml_legend=1 00:36:00.687 --rc geninfo_all_blocks=1 00:36:00.687 --rc geninfo_unexecuted_blocks=1 00:36:00.687 00:36:00.687 ' 00:36:00.687 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:00.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.687 --rc genhtml_branch_coverage=1 00:36:00.687 --rc genhtml_function_coverage=1 00:36:00.687 --rc genhtml_legend=1 00:36:00.687 --rc geninfo_all_blocks=1 00:36:00.687 --rc geninfo_unexecuted_blocks=1 00:36:00.688 00:36:00.688 ' 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:00.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.688 --rc genhtml_branch_coverage=1 00:36:00.688 --rc genhtml_function_coverage=1 00:36:00.688 --rc genhtml_legend=1 00:36:00.688 --rc geninfo_all_blocks=1 00:36:00.688 --rc geninfo_unexecuted_blocks=1 00:36:00.688 00:36:00.688 ' 00:36:00.688 23:04:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
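The build_nvmf_app_args entries traced here assemble the nvmf_tgt command line one option group at a time. A minimal sketch of that pattern in the same shell idiom, assuming a hypothetical $interrupt_mode variable (the script itself tests a literal, as the '[' 1 -eq 1 ']' entry that follows shows):

    NVMF_APP=(nvmf_tgt)                              # hypothetical binary name; the log uses the full build path
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shared-memory id and trace-group mask, as traced above
    NVMF_APP+=("${NO_HUGE[@]}")                      # expands to nothing when hugepages are in use
    if [ "$interrupt_mode" -eq 1 ]; then             # hypothetical guard variable
        NVMF_APP+=(--interrupt-mode)                 # matches the option appended in the next entry
    fi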
00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.688 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:36:00.689 23:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:08.821 23:04:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 
00:36:08.821 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:08.821 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:08.821 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:08.822 Found net devices under 0000:31:00.0: cvl_0_0 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:08.822 Found net devices under 0000:31:00.1: cvl_0_1 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.822 23:04:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:08.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:36:08.822 00:36:08.822 --- 10.0.0.2 ping statistics --- 00:36:08.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.822 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:08.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:36:08.822 00:36:08.822 --- 10.0.0.1 ping statistics --- 00:36:08.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.822 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=941054 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 941054 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 941054 ']' 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:08.822 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:08.822 [2024-09-30 23:04:34.945471] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
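nvmfappstart above prepends the namespace wrapper to NVMF_APP and then blocks in waitforlisten until the freshly started target answers on /var/tmp/spdk.sock. A hedged sketch of that launch-and-poll pattern; the socket-existence loop is only a stand-in for the real helper, which also retries an RPC against the socket, and the interval and retry count are assumptions:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break           # RPC socket node has appeared
        kill -0 "$nvmfpid" 2>/dev/null || exit 1     # target died before it could listen
        sleep 0.1
    done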
00:36:08.822 [2024-09-30 23:04:34.946456] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:36:08.822 [2024-09-30 23:04:34.946495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.822 [2024-09-30 23:04:35.028661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.822 [2024-09-30 23:04:35.103624] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.822 [2024-09-30 23:04:35.103676] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.822 [2024-09-30 23:04:35.103685] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.822 [2024-09-30 23:04:35.103693] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.822 [2024-09-30 23:04:35.103698] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:08.822 [2024-09-30 23:04:35.103723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.822 [2024-09-30 23:04:35.172730] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:08.822 [2024-09-30 23:04:35.173026] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:08.822 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:08.822 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:36:08.822 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:08.822 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:08.822 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:08.823 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:08.823 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:09.083 [2024-09-30 23:04:35.952543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.083 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:36:09.083 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:09.083 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:09.083 23:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:09.083 ************************************ 00:36:09.083 START TEST lvs_grow_clean 00:36:09.083 ************************************ 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:09.083 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:09.344 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:09.344 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:09.604 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:09.605 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:09.605 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:09.605 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:09.605 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:09.605 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc lvol 150 00:36:09.865 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c254ac28-7342-478c-afaf-63799d383f5c 00:36:09.865 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:09.865 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:10.125 [2024-09-30 23:04:36.896218] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:10.125 [2024-09-30 23:04:36.896364] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:10.125 true 00:36:10.125 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:10.125 23:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:10.125 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:10.125 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:10.385 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c254ac28-7342-478c-afaf-63799d383f5c 00:36:10.646 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:10.646 [2024-09-30 23:04:37.576880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.646 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=941753 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 941753 /var/tmp/bdevperf.sock 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 941753 ']' 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:10.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:10.906 23:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:10.906 [2024-09-30 23:04:37.823243] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:36:10.906 [2024-09-30 23:04:37.823321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941753 ] 00:36:10.906 [2024-09-30 23:04:37.906289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.166 [2024-09-30 23:04:37.999376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.737 23:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:11.737 23:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:36:11.737 23:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:11.998 Nvme0n1 00:36:11.998 23:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:12.259 [ 00:36:12.259 { 00:36:12.259 "name": "Nvme0n1", 00:36:12.259 "aliases": [ 00:36:12.259 "c254ac28-7342-478c-afaf-63799d383f5c" 00:36:12.259 ], 00:36:12.259 "product_name": "NVMe disk", 00:36:12.259 "block_size": 4096, 00:36:12.259 "num_blocks": 38912, 00:36:12.259 "uuid": "c254ac28-7342-478c-afaf-63799d383f5c", 00:36:12.259 "numa_id": 0, 00:36:12.259 "assigned_rate_limits": { 00:36:12.259 "rw_ios_per_sec": 0, 00:36:12.259 "rw_mbytes_per_sec": 0, 00:36:12.259 "r_mbytes_per_sec": 0, 00:36:12.259 "w_mbytes_per_sec": 0 00:36:12.259 }, 00:36:12.259 "claimed": false, 00:36:12.259 "zoned": false, 00:36:12.259 "supported_io_types": { 00:36:12.259 "read": true, 00:36:12.259 "write": true, 00:36:12.259 "unmap": true, 00:36:12.259 "flush": true, 00:36:12.259 "reset": true, 00:36:12.259 "nvme_admin": true, 00:36:12.259 "nvme_io": true, 00:36:12.259 "nvme_io_md": false, 00:36:12.259 "write_zeroes": true, 00:36:12.259 "zcopy": false, 00:36:12.259 "get_zone_info": false, 00:36:12.259 "zone_management": false, 00:36:12.259 "zone_append": false, 00:36:12.259 "compare": true, 00:36:12.259 "compare_and_write": true, 00:36:12.259 "abort": true, 00:36:12.259 "seek_hole": false, 00:36:12.259 "seek_data": false, 00:36:12.259 "copy": true, 
00:36:12.259 "nvme_iov_md": false 00:36:12.259 }, 00:36:12.259 "memory_domains": [ 00:36:12.259 { 00:36:12.259 "dma_device_id": "system", 00:36:12.259 "dma_device_type": 1 00:36:12.259 } 00:36:12.259 ], 00:36:12.259 "driver_specific": { 00:36:12.259 "nvme": [ 00:36:12.259 { 00:36:12.259 "trid": { 00:36:12.259 "trtype": "TCP", 00:36:12.259 "adrfam": "IPv4", 00:36:12.259 "traddr": "10.0.0.2", 00:36:12.259 "trsvcid": "4420", 00:36:12.259 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:12.259 }, 00:36:12.259 "ctrlr_data": { 00:36:12.259 "cntlid": 1, 00:36:12.259 "vendor_id": "0x8086", 00:36:12.259 "model_number": "SPDK bdev Controller", 00:36:12.259 "serial_number": "SPDK0", 00:36:12.259 "firmware_revision": "25.01", 00:36:12.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.259 "oacs": { 00:36:12.259 "security": 0, 00:36:12.259 "format": 0, 00:36:12.259 "firmware": 0, 00:36:12.259 "ns_manage": 0 00:36:12.259 }, 00:36:12.259 "multi_ctrlr": true, 00:36:12.259 "ana_reporting": false 00:36:12.259 }, 00:36:12.259 "vs": { 00:36:12.259 "nvme_version": "1.3" 00:36:12.259 }, 00:36:12.259 "ns_data": { 00:36:12.259 "id": 1, 00:36:12.259 "can_share": true 00:36:12.259 } 00:36:12.259 } 00:36:12.259 ], 00:36:12.259 "mp_policy": "active_passive" 00:36:12.259 } 00:36:12.259 } 00:36:12.259 ] 00:36:12.259 23:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=942020 00:36:12.259 23:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:12.259 23:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:12.259 Running I/O for 10 seconds... 
00:36:13.199 Latency(us) 00:36:13.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:13.199 Nvme0n1 : 1.00 17143.00 66.96 0.00 0.00 0.00 0.00 0.00 00:36:13.199 =================================================================================================================== 00:36:13.200 Total : 17143.00 66.96 0.00 0.00 0.00 0.00 0.00 00:36:13.200 00:36:14.140 23:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:14.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:14.140 Nvme0n1 : 2.00 17339.50 67.73 0.00 0.00 0.00 0.00 0.00 00:36:14.140 =================================================================================================================== 00:36:14.140 Total : 17339.50 67.73 0.00 0.00 0.00 0.00 0.00 00:36:14.140 00:36:14.400 true 00:36:14.400 23:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:14.400 23:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:14.660 23:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:14.660 23:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:14.660 23:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 942020 00:36:15.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:15.230 Nvme0n1 : 3.00 17522.67 68.45 0.00 0.00 0.00 0.00 0.00 00:36:15.230 =================================================================================================================== 00:36:15.230 Total : 17522.67 68.45 0.00 0.00 0.00 0.00 0.00 00:36:15.230 00:36:16.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:16.172 Nvme0n1 : 4.00 17613.50 68.80 0.00 0.00 0.00 0.00 0.00 00:36:16.172 =================================================================================================================== 00:36:16.172 Total : 17613.50 68.80 0.00 0.00 0.00 0.00 0.00 00:36:16.172 00:36:17.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:17.632 Nvme0n1 : 5.00 19134.00 74.74 0.00 0.00 0.00 0.00 0.00 00:36:17.632 =================================================================================================================== 00:36:17.632 Total : 19134.00 74.74 0.00 0.00 0.00 0.00 0.00 00:36:17.632 00:36:18.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:18.203 Nvme0n1 : 6.00 20201.17 78.91 0.00 0.00 0.00 0.00 0.00 00:36:18.203 =================================================================================================================== 00:36:18.203 Total : 20201.17 78.91 0.00 0.00 0.00 0.00 0.00 00:36:18.203 00:36:19.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:19.144 Nvme0n1 : 7.00 20963.14 81.89 0.00 0.00 0.00 0.00 
0.00 00:36:19.144 =================================================================================================================== 00:36:19.144 Total : 20963.14 81.89 0.00 0.00 0.00 0.00 0.00 00:36:19.144 00:36:20.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:20.528 Nvme0n1 : 8.00 21542.62 84.15 0.00 0.00 0.00 0.00 0.00 00:36:20.528 =================================================================================================================== 00:36:20.528 Total : 21542.62 84.15 0.00 0.00 0.00 0.00 0.00 00:36:20.528 00:36:21.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:21.469 Nvme0n1 : 9.00 21993.44 85.91 0.00 0.00 0.00 0.00 0.00 00:36:21.469 =================================================================================================================== 00:36:21.469 Total : 21993.44 85.91 0.00 0.00 0.00 0.00 0.00 00:36:21.469 00:36:22.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:22.411 Nvme0n1 : 10.00 22354.10 87.32 0.00 0.00 0.00 0.00 0.00 00:36:22.411 =================================================================================================================== 00:36:22.411 Total : 22354.10 87.32 0.00 0.00 0.00 0.00 0.00 00:36:22.411 00:36:22.411 00:36:22.411 Latency(us) 00:36:22.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:22.411 Nvme0n1 : 10.01 22355.18 87.32 0.00 0.00 5722.26 4259.84 31457.28 00:36:22.411 =================================================================================================================== 00:36:22.411 Total : 22355.18 87.32 0.00 0.00 5722.26 4259.84 31457.28 00:36:22.411 { 00:36:22.411 "results": [ 00:36:22.411 { 00:36:22.411 "job": "Nvme0n1", 00:36:22.411 "core_mask": "0x2", 00:36:22.411 "workload": "randwrite", 00:36:22.411 "status": "finished", 00:36:22.411 "queue_depth": 128, 00:36:22.411 "io_size": 4096, 00:36:22.411 "runtime": 10.005241, 00:36:22.411 "iops": 22355.183648249953, 00:36:22.411 "mibps": 87.32493612597638, 00:36:22.411 "io_failed": 0, 00:36:22.411 "io_timeout": 0, 00:36:22.411 "avg_latency_us": 5722.2566159816515, 00:36:22.411 "min_latency_us": 4259.84, 00:36:22.411 "max_latency_us": 31457.28 00:36:22.411 } 00:36:22.411 ], 00:36:22.411 "core_count": 1 00:36:22.411 } 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 941753 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 941753 ']' 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 941753 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 941753 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 941753' 00:36:22.411 killing process with pid 941753 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 941753 00:36:22.411 Received shutdown signal, test time was about 10.000000 seconds 00:36:22.411 00:36:22.411 Latency(us) 00:36:22.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.411 =================================================================================================================== 00:36:22.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 941753 00:36:22.411 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:22.672 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:22.931 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:22.932 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:22.932 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:22.932 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:36:22.932 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:23.192 [2024-09-30 23:04:50.068290] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
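The NOT wrapper being traced here and in the entries that follow inverts the wrapped command's exit status, so a lookup that must fail (the lvstore's aio base bdev was just deleted) keeps the test passing; the JSON-RPC 'No such device' response it expects appears in the next entries. A minimal sketch of the inversion, assuming only the status flip matters (the valid_exec_arg type checks shown in the trace are omitted):

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command, capture its exit status
        (( es != 0 ))        # succeed only if the command failed
    }
    NOT scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc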
00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:23.192 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:23.453 request: 00:36:23.453 { 00:36:23.453 "uuid": "4cb47dad-f6cf-47f6-b467-67ce6c30a6fc", 00:36:23.453 "method": "bdev_lvol_get_lvstores", 00:36:23.453 "req_id": 1 00:36:23.453 } 00:36:23.453 Got JSON-RPC error response 00:36:23.453 response: 00:36:23.453 { 00:36:23.453 "code": -19, 00:36:23.453 "message": "No such device" 00:36:23.453 } 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:23.453 aio_bdev 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c254ac28-7342-478c-afaf-63799d383f5c 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c254ac28-7342-478c-afaf-63799d383f5c 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
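After aio_bdev is re-created over the same file, waitforbdev confirms the lvol resurfaces by letting bdev examination finish and then querying the bdev with a timeout. The two RPCs it reduces to, with arguments copied from the trace (the helper's retry bookkeeping is elided):

    scripts/rpc.py bdev_wait_for_examine                                           # block until bdev examine completes
    scripts/rpc.py bdev_get_bdevs -b c254ac28-7342-478c-afaf-63799d383f5c -t 2000  # wait up to 2000 ms for the lvol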
00:36:23.453 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:23.714 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c254ac28-7342-478c-afaf-63799d383f5c -t 2000 00:36:23.975 [ 00:36:23.975 { 00:36:23.975 "name": "c254ac28-7342-478c-afaf-63799d383f5c", 00:36:23.975 "aliases": [ 00:36:23.975 "lvs/lvol" 00:36:23.975 ], 00:36:23.975 "product_name": "Logical Volume", 00:36:23.975 "block_size": 4096, 00:36:23.975 "num_blocks": 38912, 00:36:23.976 "uuid": "c254ac28-7342-478c-afaf-63799d383f5c", 00:36:23.976 "assigned_rate_limits": { 00:36:23.976 "rw_ios_per_sec": 0, 00:36:23.976 "rw_mbytes_per_sec": 0, 00:36:23.976 "r_mbytes_per_sec": 0, 00:36:23.976 "w_mbytes_per_sec": 0 00:36:23.976 }, 00:36:23.976 "claimed": false, 00:36:23.976 "zoned": false, 00:36:23.976 "supported_io_types": { 00:36:23.976 "read": true, 00:36:23.976 "write": true, 00:36:23.976 "unmap": true, 00:36:23.976 "flush": false, 00:36:23.976 "reset": true, 00:36:23.976 "nvme_admin": false, 00:36:23.976 "nvme_io": false, 00:36:23.976 "nvme_io_md": false, 00:36:23.976 "write_zeroes": true, 00:36:23.976 "zcopy": false, 00:36:23.976 "get_zone_info": false, 00:36:23.976 "zone_management": false, 00:36:23.976 "zone_append": false, 00:36:23.976 "compare": false, 00:36:23.976 "compare_and_write": false, 00:36:23.976 "abort": false, 00:36:23.976 "seek_hole": true, 00:36:23.976 "seek_data": true, 00:36:23.976 "copy": false, 00:36:23.976 "nvme_iov_md": false 00:36:23.976 }, 00:36:23.976 "driver_specific": { 00:36:23.976 "lvol": { 00:36:23.976 "lvol_store_uuid": "4cb47dad-f6cf-47f6-b467-67ce6c30a6fc", 00:36:23.976 "base_bdev": "aio_bdev", 00:36:23.976 "thin_provision": false, 00:36:23.976 "num_allocated_clusters": 38, 00:36:23.976 "snapshot": false, 00:36:23.976 "clone": false, 00:36:23.976 "esnap_clone": false 00:36:23.976 } 00:36:23.976 } 00:36:23.976 } 00:36:23.976 ] 00:36:23.976 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:36:23.976 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:23.976 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:23.976 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:23.976 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:23.976 23:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:24.237 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:24.237 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c254ac28-7342-478c-afaf-63799d383f5c 00:36:24.498 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cb47dad-f6cf-47f6-b467-67ce6c30a6fc 00:36:24.498 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:24.759 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:24.759 00:36:24.759 real 0m15.700s 00:36:24.759 user 0m15.378s 00:36:24.759 sys 0m1.388s 00:36:24.759 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:24.759 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:24.759 ************************************ 00:36:24.759 END TEST lvs_grow_clean 00:36:24.759 ************************************ 00:36:24.759 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:36:24.759 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:24.759 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:24.759 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:25.020 ************************************ 00:36:25.020 START TEST lvs_grow_dirty 00:36:25.020 ************************************ 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:25.020 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:25.020 23:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:25.020 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:25.020 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:25.281 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:25.281 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:25.281 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:25.541 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:25.541 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:25.541 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 lvol 150 00:36:25.541 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d3f505d-6566-4c95-9267-c05ab9964463 00:36:25.541 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:25.541 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:25.802 [2024-09-30 23:04:52.668213] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:25.802 [2024-09-30 23:04:52.668353] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:25.802 true 00:36:25.802 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:25.802 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:26.062 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:26.062 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:26.062 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d3f505d-6566-4c95-9267-c05ab9964463 00:36:26.323 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:26.323 [2024-09-30 23:04:53.328718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=944832 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 944832 /var/tmp/bdevperf.sock 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 944832 ']' 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:26.584 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:26.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:26.585 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:26.585 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:26.585 [2024-09-30 23:04:53.552833] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
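Note: the RPC calls around this point export the freshly created logical volume over NVMe/TCP, and the bdevperf process (started with -z so it idles waiting for RPCs on /var/tmp/bdevperf.sock) then attaches to it as an initiator. A minimal sketch of that sequence, collected from the commands in this run (the NQN, SPDK0 serial, 10.0.0.2:4420 listener, and lvol UUID are simply the values this log happens to use; substitute your own, and paths are shortened relative to the spdk checkout):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d3f505d-6566-4c95-9267-c05ab9964463
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # from the bdevperf side, over its own RPC socket:
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The bdev_get_bdevs dump that follows confirms the attach: Nvme0n1 shows up as an "NVMe disk" whose alias is the lvol UUID exported above.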
00:36:26.585 [2024-09-30 23:04:53.552914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944832 ] 00:36:26.845 [2024-09-30 23:04:53.635927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.845 [2024-09-30 23:04:53.695137] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.415 23:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:27.415 23:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:36:27.415 23:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:27.676 Nvme0n1 00:36:27.676 23:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:27.937 [ 00:36:27.937 { 00:36:27.937 "name": "Nvme0n1", 00:36:27.937 "aliases": [ 00:36:27.937 "1d3f505d-6566-4c95-9267-c05ab9964463" 00:36:27.937 ], 00:36:27.937 "product_name": "NVMe disk", 00:36:27.937 "block_size": 4096, 00:36:27.937 "num_blocks": 38912, 00:36:27.937 "uuid": "1d3f505d-6566-4c95-9267-c05ab9964463", 00:36:27.937 "numa_id": 0, 00:36:27.937 "assigned_rate_limits": { 00:36:27.937 "rw_ios_per_sec": 0, 00:36:27.937 "rw_mbytes_per_sec": 0, 00:36:27.937 "r_mbytes_per_sec": 0, 00:36:27.937 "w_mbytes_per_sec": 0 00:36:27.937 }, 00:36:27.937 "claimed": false, 00:36:27.937 "zoned": false, 00:36:27.937 "supported_io_types": { 00:36:27.937 "read": true, 00:36:27.937 "write": true, 00:36:27.937 "unmap": true, 00:36:27.937 "flush": true, 00:36:27.937 "reset": true, 00:36:27.937 "nvme_admin": true, 00:36:27.937 "nvme_io": true, 00:36:27.937 "nvme_io_md": false, 00:36:27.937 "write_zeroes": true, 00:36:27.937 "zcopy": false, 00:36:27.937 "get_zone_info": false, 00:36:27.937 "zone_management": false, 00:36:27.937 "zone_append": false, 00:36:27.937 "compare": true, 00:36:27.937 "compare_and_write": true, 00:36:27.937 "abort": true, 00:36:27.937 "seek_hole": false, 00:36:27.937 "seek_data": false, 00:36:27.937 "copy": true, 00:36:27.937 "nvme_iov_md": false 00:36:27.937 }, 00:36:27.937 "memory_domains": [ 00:36:27.937 { 00:36:27.937 "dma_device_id": "system", 00:36:27.937 "dma_device_type": 1 00:36:27.937 } 00:36:27.937 ], 00:36:27.937 "driver_specific": { 00:36:27.937 "nvme": [ 00:36:27.937 { 00:36:27.937 "trid": { 00:36:27.937 "trtype": "TCP", 00:36:27.937 "adrfam": "IPv4", 00:36:27.937 "traddr": "10.0.0.2", 00:36:27.937 "trsvcid": "4420", 00:36:27.937 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:27.937 }, 00:36:27.937 "ctrlr_data": { 00:36:27.937 "cntlid": 1, 00:36:27.937 "vendor_id": "0x8086", 00:36:27.937 "model_number": "SPDK bdev Controller", 00:36:27.937 "serial_number": "SPDK0", 00:36:27.937 "firmware_revision": "25.01", 00:36:27.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.937 "oacs": { 00:36:27.937 "security": 0, 00:36:27.937 "format": 0, 00:36:27.937 "firmware": 0, 00:36:27.937 "ns_manage": 0 00:36:27.937 }, 
00:36:27.937 "multi_ctrlr": true, 00:36:27.937 "ana_reporting": false 00:36:27.937 }, 00:36:27.937 "vs": { 00:36:27.937 "nvme_version": "1.3" 00:36:27.937 }, 00:36:27.937 "ns_data": { 00:36:27.937 "id": 1, 00:36:27.937 "can_share": true 00:36:27.937 } 00:36:27.937 } 00:36:27.937 ], 00:36:27.937 "mp_policy": "active_passive" 00:36:27.937 } 00:36:27.937 } 00:36:27.937 ] 00:36:27.937 23:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=944934 00:36:27.937 23:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:27.937 23:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:27.937 Running I/O for 10 seconds... 00:36:28.878 Latency(us) 00:36:28.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:28.878 Nvme0n1 : 1.00 17525.00 68.46 0.00 0.00 0.00 0.00 0.00 00:36:28.878 =================================================================================================================== 00:36:28.878 Total : 17525.00 68.46 0.00 0.00 0.00 0.00 0.00 00:36:28.878 00:36:29.819 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:30.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:30.079 Nvme0n1 : 2.00 17754.00 69.35 0.00 0.00 0.00 0.00 0.00 00:36:30.079 =================================================================================================================== 00:36:30.079 Total : 17754.00 69.35 0.00 0.00 0.00 0.00 0.00 00:36:30.079 00:36:30.079 true 00:36:30.079 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:30.079 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:30.338 23:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:30.338 23:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:30.338 23:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 944934 00:36:30.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:30.910 Nvme0n1 : 3.00 17852.00 69.73 0.00 0.00 0.00 0.00 0.00 00:36:30.910 =================================================================================================================== 00:36:30.910 Total : 17852.00 69.73 0.00 0.00 0.00 0.00 0.00 00:36:30.910 00:36:31.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:31.850 Nvme0n1 : 4.00 17901.00 69.93 0.00 0.00 0.00 0.00 0.00 00:36:31.850 =================================================================================================================== 
00:36:31.850 Total : 17901.00 69.93 0.00 0.00 0.00 0.00 0.00 00:36:31.851 00:36:33.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:33.234 Nvme0n1 : 5.00 18411.00 71.92 0.00 0.00 0.00 0.00 0.00 00:36:33.234 =================================================================================================================== 00:36:33.234 Total : 18411.00 71.92 0.00 0.00 0.00 0.00 0.00 00:36:33.234 00:36:34.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:34.174 Nvme0n1 : 6.00 19592.67 76.53 0.00 0.00 0.00 0.00 0.00 00:36:34.174 =================================================================================================================== 00:36:34.174 Total : 19592.67 76.53 0.00 0.00 0.00 0.00 0.00 00:36:34.174 00:36:35.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:35.115 Nvme0n1 : 7.00 20441.86 79.85 0.00 0.00 0.00 0.00 0.00 00:36:35.115 =================================================================================================================== 00:36:35.115 Total : 20441.86 79.85 0.00 0.00 0.00 0.00 0.00 00:36:35.115 00:36:36.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:36.057 Nvme0n1 : 8.00 21086.62 82.37 0.00 0.00 0.00 0.00 0.00 00:36:36.057 =================================================================================================================== 00:36:36.057 Total : 21086.62 82.37 0.00 0.00 0.00 0.00 0.00 00:36:36.057 00:36:37.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:37.000 Nvme0n1 : 9.00 21582.78 84.31 0.00 0.00 0.00 0.00 0.00 00:36:37.000 =================================================================================================================== 00:36:37.000 Total : 21582.78 84.31 0.00 0.00 0.00 0.00 0.00 00:36:37.000 00:36:37.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:37.942 Nvme0n1 : 10.00 21982.20 85.87 0.00 0.00 0.00 0.00 0.00 00:36:37.942 =================================================================================================================== 00:36:37.942 Total : 21982.20 85.87 0.00 0.00 0.00 0.00 0.00 00:36:37.942 00:36:37.942 00:36:37.942 Latency(us) 00:36:37.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:37.942 Nvme0n1 : 10.00 21985.12 85.88 0.00 0.00 5818.87 3194.88 31457.28 00:36:37.942 =================================================================================================================== 00:36:37.942 Total : 21985.12 85.88 0.00 0.00 5818.87 3194.88 31457.28 00:36:37.942 { 00:36:37.942 "results": [ 00:36:37.942 { 00:36:37.942 "job": "Nvme0n1", 00:36:37.942 "core_mask": "0x2", 00:36:37.942 "workload": "randwrite", 00:36:37.942 "status": "finished", 00:36:37.942 "queue_depth": 128, 00:36:37.942 "io_size": 4096, 00:36:37.942 "runtime": 10.004495, 00:36:37.942 "iops": 21985.117689598526, 00:36:37.942 "mibps": 85.87936597499424, 00:36:37.942 "io_failed": 0, 00:36:37.942 "io_timeout": 0, 00:36:37.942 "avg_latency_us": 5818.873567507767, 00:36:37.942 "min_latency_us": 3194.88, 00:36:37.942 "max_latency_us": 31457.28 00:36:37.942 } 00:36:37.942 ], 00:36:37.942 "core_count": 1 00:36:37.942 } 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 944832 00:36:37.942 23:05:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 944832 ']' 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 944832 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 944832 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 944832' 00:36:37.942 killing process with pid 944832 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 944832 00:36:37.942 Received shutdown signal, test time was about 10.000000 seconds 00:36:37.942 00:36:37.942 Latency(us) 00:36:37.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.942 =================================================================================================================== 00:36:37.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:37.942 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 944832 00:36:38.203 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:38.463 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:38.463 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:38.463 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 941054 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 941054 00:36:38.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 941054 Killed "${NVMF_APP[@]}" "$@" 00:36:38.724 
23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=947034 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 947034 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 947034 ']' 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:38.724 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:38.724 [2024-09-30 23:05:05.720102] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:38.724 [2024-09-30 23:05:05.721137] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:36:38.724 [2024-09-30 23:05:05.721186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.985 [2024-09-30 23:05:05.806516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.985 [2024-09-30 23:05:05.875118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.985 [2024-09-30 23:05:05.875158] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:38.985 [2024-09-30 23:05:05.875164] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.985 [2024-09-30 23:05:05.875169] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
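Note: this is the "dirty" half of the test. After growing the lvstore while I/O was running, the harness SIGKILLed the old target (the kill -9 941054 above), so the lvstore was never cleanly unloaded. It then restarts nvmf_tgt (here inside the cvl_0_0_ns_spdk network namespace, in interrupt mode) and re-creates the AIO bdev; examining it forces the blobstore to replay its metadata, which is the "Performing recovery on blobstore" notice just below. The cluster counts are then re-checked to confirm the grown geometry (99 total, 61 free) survived the crash. A sketch of that step, with values taken from this run:

    kill -9 941054        # old target pid; lvstore left dirty, no clean unload
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
    ./scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 \
        | jq -r '.[0].total_data_clusters'    # expect 99 after recovery (61 free)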
00:36:38.985 [2024-09-30 23:05:05.875174] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:38.985 [2024-09-30 23:05:05.875194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.985 [2024-09-30 23:05:05.927851] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:38.985 [2024-09-30 23:05:05.928059] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:39.558 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:39.558 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:36:39.558 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:39.558 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:39.558 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:39.558 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:39.558 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:39.818 [2024-09-30 23:05:06.721245] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:36:39.818 [2024-09-30 23:05:06.721469] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:36:39.818 [2024-09-30 23:05:06.721558] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:36:39.818 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:36:39.818 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1d3f505d-6566-4c95-9267-c05ab9964463 00:36:39.818 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1d3f505d-6566-4c95-9267-c05ab9964463 00:36:39.818 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:39.818 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:36:39.818 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:39.818 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:39.818 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:40.078 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d3f505d-6566-4c95-9267-c05ab9964463 -t 2000 00:36:40.078 [ 00:36:40.078 { 00:36:40.078 "name": "1d3f505d-6566-4c95-9267-c05ab9964463", 00:36:40.078 "aliases": [ 00:36:40.078 "lvs/lvol" 00:36:40.078 ], 00:36:40.078 "product_name": "Logical Volume", 00:36:40.078 "block_size": 4096, 00:36:40.078 "num_blocks": 38912, 00:36:40.078 "uuid": "1d3f505d-6566-4c95-9267-c05ab9964463", 00:36:40.078 "assigned_rate_limits": { 00:36:40.078 "rw_ios_per_sec": 0, 00:36:40.078 "rw_mbytes_per_sec": 0, 00:36:40.078 "r_mbytes_per_sec": 0, 00:36:40.078 "w_mbytes_per_sec": 0 00:36:40.078 }, 00:36:40.078 "claimed": false, 00:36:40.078 "zoned": false, 00:36:40.078 "supported_io_types": { 00:36:40.078 "read": true, 00:36:40.078 "write": true, 00:36:40.078 "unmap": true, 00:36:40.078 "flush": false, 00:36:40.078 "reset": true, 00:36:40.078 "nvme_admin": false, 00:36:40.078 "nvme_io": false, 00:36:40.078 "nvme_io_md": false, 00:36:40.078 "write_zeroes": true, 00:36:40.078 "zcopy": false, 00:36:40.078 "get_zone_info": false, 00:36:40.078 "zone_management": false, 00:36:40.078 "zone_append": false, 00:36:40.078 "compare": false, 00:36:40.078 "compare_and_write": false, 00:36:40.078 "abort": false, 00:36:40.078 "seek_hole": true, 00:36:40.078 "seek_data": true, 00:36:40.078 "copy": false, 00:36:40.078 "nvme_iov_md": false 00:36:40.078 }, 00:36:40.078 "driver_specific": { 00:36:40.078 "lvol": { 00:36:40.078 "lvol_store_uuid": "46a36bdc-b67f-400e-9f41-6711ddd671e5", 00:36:40.078 "base_bdev": "aio_bdev", 00:36:40.078 "thin_provision": false, 00:36:40.078 "num_allocated_clusters": 38, 00:36:40.078 "snapshot": false, 00:36:40.078 "clone": false, 00:36:40.078 "esnap_clone": false 00:36:40.078 } 00:36:40.078 } 00:36:40.078 } 00:36:40.078 ] 00:36:40.078 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:36:40.078 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:40.078 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:36:40.339 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:36:40.339 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:40.339 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:36:40.600 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:36:40.600 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:40.600 [2024-09-30 23:05:07.571701] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:40.861 request: 00:36:40.861 { 00:36:40.861 "uuid": "46a36bdc-b67f-400e-9f41-6711ddd671e5", 00:36:40.861 "method": "bdev_lvol_get_lvstores", 00:36:40.861 "req_id": 1 00:36:40.861 } 00:36:40.861 Got JSON-RPC error response 00:36:40.861 response: 00:36:40.861 { 00:36:40.861 "code": -19, 00:36:40.861 "message": "No such device" 00:36:40.861 } 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:40.861 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:41.121 
aio_bdev 00:36:41.122 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1d3f505d-6566-4c95-9267-c05ab9964463 00:36:41.122 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1d3f505d-6566-4c95-9267-c05ab9964463 00:36:41.122 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:41.122 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:36:41.122 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:41.122 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:41.122 23:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:41.122 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d3f505d-6566-4c95-9267-c05ab9964463 -t 2000 00:36:41.382 [ 00:36:41.382 { 00:36:41.382 "name": "1d3f505d-6566-4c95-9267-c05ab9964463", 00:36:41.382 "aliases": [ 00:36:41.382 "lvs/lvol" 00:36:41.382 ], 00:36:41.382 "product_name": "Logical Volume", 00:36:41.382 "block_size": 4096, 00:36:41.382 "num_blocks": 38912, 00:36:41.382 "uuid": "1d3f505d-6566-4c95-9267-c05ab9964463", 00:36:41.382 "assigned_rate_limits": { 00:36:41.382 "rw_ios_per_sec": 0, 00:36:41.382 "rw_mbytes_per_sec": 0, 00:36:41.382 "r_mbytes_per_sec": 0, 00:36:41.382 "w_mbytes_per_sec": 0 00:36:41.382 }, 00:36:41.383 "claimed": false, 00:36:41.383 "zoned": false, 00:36:41.383 "supported_io_types": { 00:36:41.383 "read": true, 00:36:41.383 "write": true, 00:36:41.383 "unmap": true, 00:36:41.383 "flush": false, 00:36:41.383 "reset": true, 00:36:41.383 "nvme_admin": false, 00:36:41.383 "nvme_io": false, 00:36:41.383 "nvme_io_md": false, 00:36:41.383 "write_zeroes": true, 00:36:41.383 "zcopy": false, 00:36:41.383 "get_zone_info": false, 00:36:41.383 "zone_management": false, 00:36:41.383 "zone_append": false, 00:36:41.383 "compare": false, 00:36:41.383 "compare_and_write": false, 00:36:41.383 "abort": false, 00:36:41.383 "seek_hole": true, 00:36:41.383 "seek_data": true, 00:36:41.383 "copy": false, 00:36:41.383 "nvme_iov_md": false 00:36:41.383 }, 00:36:41.383 "driver_specific": { 00:36:41.383 "lvol": { 00:36:41.383 "lvol_store_uuid": "46a36bdc-b67f-400e-9f41-6711ddd671e5", 00:36:41.383 "base_bdev": "aio_bdev", 00:36:41.383 "thin_provision": false, 00:36:41.383 "num_allocated_clusters": 38, 00:36:41.383 "snapshot": false, 00:36:41.383 "clone": false, 00:36:41.383 "esnap_clone": false 00:36:41.383 } 00:36:41.383 } 00:36:41.383 } 00:36:41.383 ] 00:36:41.383 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:36:41.383 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:41.383 23:05:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:41.644 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:41.644 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:41.644 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:41.644 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:41.644 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d3f505d-6566-4c95-9267-c05ab9964463 00:36:41.904 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 46a36bdc-b67f-400e-9f41-6711ddd671e5 00:36:42.164 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:42.164 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:42.164 00:36:42.164 real 0m17.356s 00:36:42.164 user 0m35.294s 00:36:42.164 sys 0m3.023s 00:36:42.164 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:42.164 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:42.164 ************************************ 00:36:42.164 END TEST lvs_grow_dirty 00:36:42.164 ************************************ 00:36:42.425 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:36:42.425 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:36:42.425 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:36:42.425 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:36:42.425 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:42.425 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:36:42.425 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:36:42.425 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:42.426 nvmf_trace.0 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:42.426 rmmod nvme_tcp 00:36:42.426 rmmod nvme_fabrics 00:36:42.426 rmmod nvme_keyring 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 947034 ']' 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 947034 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 947034 ']' 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 947034 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 947034 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 947034' 00:36:42.426 killing process with pid 947034 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 947034 00:36:42.426 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 947034 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:42.687 
23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:42.687 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.601 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:44.601 00:36:44.601 real 0m44.305s 00:36:44.601 user 0m53.648s 00:36:44.601 sys 0m10.441s 00:36:44.601 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:44.601 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:44.601 ************************************ 00:36:44.601 END TEST nvmf_lvs_grow 00:36:44.601 ************************************ 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:44.861 ************************************ 00:36:44.861 START TEST nvmf_bdev_io_wait 00:36:44.861 ************************************ 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:44.861 * Looking for test storage... 
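Note: with both subtests passed, nvmftestfini tears the rig down before the next suite (bdev_io_wait) begins its storage probe: the trace shared-memory file is archived for offline analysis, the initiator kernel modules are unloaded, the SPDK iptables rules are stripped, and the test NIC is flushed. The same steps, condensed from the commands in this log (with $output_dir standing in for the run's output directory; the cvl_0_1 interface name is specific to this CI rig):

    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    modprobe -v -r nvme-tcp        # rmmod lines above show tcp/fabrics/keyring going
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1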
00:36:44.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:36:44.861 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:36:44.862 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:44.862 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:36:44.862 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:36:44.862 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:44.862 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:44.862 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:45.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.124 --rc genhtml_branch_coverage=1 00:36:45.124 --rc genhtml_function_coverage=1 00:36:45.124 --rc genhtml_legend=1 00:36:45.124 --rc geninfo_all_blocks=1 00:36:45.124 --rc geninfo_unexecuted_blocks=1 00:36:45.124 00:36:45.124 ' 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:45.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.124 --rc genhtml_branch_coverage=1 00:36:45.124 --rc genhtml_function_coverage=1 00:36:45.124 --rc genhtml_legend=1 00:36:45.124 --rc geninfo_all_blocks=1 00:36:45.124 --rc geninfo_unexecuted_blocks=1 00:36:45.124 00:36:45.124 ' 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:45.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.124 --rc genhtml_branch_coverage=1 00:36:45.124 --rc genhtml_function_coverage=1 00:36:45.124 --rc genhtml_legend=1 00:36:45.124 --rc geninfo_all_blocks=1 00:36:45.124 --rc geninfo_unexecuted_blocks=1 00:36:45.124 00:36:45.124 ' 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:45.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.124 --rc genhtml_branch_coverage=1 00:36:45.124 --rc genhtml_function_coverage=1 00:36:45.124 --rc genhtml_legend=1 00:36:45.124 --rc geninfo_all_blocks=1 00:36:45.124 --rc 
geninfo_unexecuted_blocks=1 00:36:45.124 00:36:45.124 ' 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:45.124 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:36:45.125 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:53.266 23:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:53.266 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:53.266 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.266 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:53.267 Found net devices under 0000:31:00.0: cvl_0_0 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.267 23:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:53.267 Found net devices under 0000:31:00.1: cvl_0_1 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:53.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:53.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:36:53.267 00:36:53.267 --- 10.0.0.2 ping statistics --- 00:36:53.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.267 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:53.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:53.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:36:53.267 00:36:53.267 --- 10.0.0.1 ping statistics --- 00:36:53.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.267 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=952002 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 952002 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 952002 ']' 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
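[Note] waitforlisten blocks here until the nvmf_tgt just launched (pid 952002, run inside the cvl_0_0_ns_spdk namespace) exposes its RPC socket. A simplified, hedged stand-in for that wait; the real common.sh helper also probes the socket with rpc.py, while this sketch only checks process liveness and socket presence:

    waitforlisten_sketch() {  # usage: waitforlisten_sketch <pid> [rpc_sock]
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target process died
            [[ -S $sock ]] && return 0                # RPC socket is listening
            sleep 0.1
        done
        return 1                                      # timed out
    }

The unix-domain socket lives on the filesystem, so the wait works from the host side even though the target runs in a network namespace.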
00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:53.267 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.267 [2024-09-30 23:05:19.468141] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:53.267 [2024-09-30 23:05:19.469020] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:36:53.267 [2024-09-30 23:05:19.469060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.267 [2024-09-30 23:05:19.550931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:53.267 [2024-09-30 23:05:19.649119] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:53.267 [2024-09-30 23:05:19.649180] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.267 [2024-09-30 23:05:19.649188] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.267 [2024-09-30 23:05:19.649196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:53.267 [2024-09-30 23:05:19.649206] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:53.267 [2024-09-30 23:05:19.649374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.267 [2024-09-30 23:05:19.649535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:53.267 [2024-09-30 23:05:19.649696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.267 [2024-09-30 23:05:19.649696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:53.267 [2024-09-30 23:05:19.650052] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
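[Note] The four "Reactor started" notices above match the -m 0xF core mask passed to nvmf_tgt. Once they appear, the reactor state can be confirmed over the same RPC socket; framework_get_reactors is a standard SPDK RPC, though the jq filter and field names here are illustrative rather than taken from this run:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors | jq '.reactors[].lcore'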
00:36:53.267 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:53.267 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:36:53.268 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:53.268 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:53.268 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.529 [2024-09-30 23:05:20.381001] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:53.529 [2024-09-30 23:05:20.381022] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:53.529 [2024-09-30 23:05:20.381433] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:53.529 [2024-09-30 23:05:20.381564] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
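[Note] For reference, the configuration RPCs this bdev_io_wait.sh run issues, collected from the rpc_cmd traces above and below; rpc.py is shown as the standalone equivalent of the test's rpc_cmd wrapper, and the comment on the first line is an inference about the test's intent:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_set_options -p 5 -c 1          # tiny bdev_io pool, to exercise the IO_WAIT path
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420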
00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.529 [2024-09-30 23:05:20.390527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:53.529 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.530 Malloc0 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:53.530 [2024-09-30 23:05:20.482826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=952354 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=952356 00:36:53.530 23:05:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:53.530 { 00:36:53.530 "params": { 00:36:53.530 "name": "Nvme$subsystem", 00:36:53.530 "trtype": "$TEST_TRANSPORT", 00:36:53.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.530 "adrfam": "ipv4", 00:36:53.530 "trsvcid": "$NVMF_PORT", 00:36:53.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.530 "hdgst": ${hdgst:-false}, 00:36:53.530 "ddgst": ${ddgst:-false} 00:36:53.530 }, 00:36:53.530 "method": "bdev_nvme_attach_controller" 00:36:53.530 } 00:36:53.530 EOF 00:36:53.530 )") 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=952358 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=952361 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:53.530 { 00:36:53.530 "params": { 00:36:53.530 "name": "Nvme$subsystem", 00:36:53.530 "trtype": "$TEST_TRANSPORT", 00:36:53.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.530 "adrfam": "ipv4", 00:36:53.530 "trsvcid": "$NVMF_PORT", 00:36:53.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.530 "hdgst": ${hdgst:-false}, 00:36:53.530 "ddgst": ${ddgst:-false} 00:36:53.530 }, 00:36:53.530 "method": "bdev_nvme_attach_controller" 00:36:53.530 } 00:36:53.530 EOF 00:36:53.530 )") 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:53.530 { 00:36:53.530 "params": { 00:36:53.530 "name": "Nvme$subsystem", 00:36:53.530 "trtype": "$TEST_TRANSPORT", 00:36:53.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.530 "adrfam": "ipv4", 00:36:53.530 "trsvcid": "$NVMF_PORT", 00:36:53.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.530 "hdgst": ${hdgst:-false}, 00:36:53.530 "ddgst": ${ddgst:-false} 00:36:53.530 }, 00:36:53.530 "method": "bdev_nvme_attach_controller" 00:36:53.530 } 00:36:53.530 EOF 00:36:53.530 )") 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:53.530 { 00:36:53.530 "params": { 00:36:53.530 "name": "Nvme$subsystem", 00:36:53.530 "trtype": "$TEST_TRANSPORT", 00:36:53.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.530 "adrfam": "ipv4", 00:36:53.530 "trsvcid": "$NVMF_PORT", 00:36:53.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.530 "hdgst": ${hdgst:-false}, 00:36:53.530 "ddgst": ${ddgst:-false} 00:36:53.530 }, 00:36:53.530 "method": "bdev_nvme_attach_controller" 00:36:53.530 } 00:36:53.530 EOF 00:36:53.530 )") 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 952354 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
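[Note] Each of the four bdevperf instances reads its target description from /dev/fd/63, i.e. a process substitution fed by gen_nvmf_target_json. Assembled from the heredoc fragments traced above, one launch is roughly equivalent to the sketch below; the "params" block is verbatim from the printf output just below, while the outer subsystems/config wrapper is assumed from nvmf/common.sh's gen_nvmf_target_json and abbreviated here:

    config='{
      "subsystems": [ { "subsystem": "bdev", "config": [ {
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false },
        "method": "bdev_nvme_attach_controller"
      } ] } ]
    }'
    build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(printf '%s\n' "$config")

The other three instances differ only in core mask, instance id, and workload (read, flush, unmap), as the traced command lines show.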
00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:53.530 "params": { 00:36:53.530 "name": "Nvme1", 00:36:53.530 "trtype": "tcp", 00:36:53.530 "traddr": "10.0.0.2", 00:36:53.530 "adrfam": "ipv4", 00:36:53.530 "trsvcid": "4420", 00:36:53.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.530 "hdgst": false, 00:36:53.530 "ddgst": false 00:36:53.530 }, 00:36:53.530 "method": "bdev_nvme_attach_controller" 00:36:53.530 }' 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:53.530 "params": { 00:36:53.530 "name": "Nvme1", 00:36:53.530 "trtype": "tcp", 00:36:53.530 "traddr": "10.0.0.2", 00:36:53.530 "adrfam": "ipv4", 00:36:53.530 "trsvcid": "4420", 00:36:53.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.530 "hdgst": false, 00:36:53.530 "ddgst": false 00:36:53.530 }, 00:36:53.530 "method": "bdev_nvme_attach_controller" 00:36:53.530 }' 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:36:53.530 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:53.530 "params": { 00:36:53.530 "name": "Nvme1", 00:36:53.530 "trtype": "tcp", 00:36:53.530 "traddr": "10.0.0.2", 00:36:53.530 "adrfam": "ipv4", 00:36:53.530 "trsvcid": "4420", 00:36:53.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.530 "hdgst": false, 00:36:53.531 "ddgst": false 00:36:53.531 }, 00:36:53.531 "method": "bdev_nvme_attach_controller" 00:36:53.531 }' 00:36:53.531 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:36:53.531 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:53.531 "params": { 00:36:53.531 "name": "Nvme1", 00:36:53.531 "trtype": "tcp", 00:36:53.531 "traddr": "10.0.0.2", 00:36:53.531 "adrfam": "ipv4", 00:36:53.531 "trsvcid": "4420", 00:36:53.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.531 "hdgst": false, 00:36:53.531 "ddgst": false 00:36:53.531 }, 00:36:53.531 "method": "bdev_nvme_attach_controller" 00:36:53.531 }' 00:36:53.531 [2024-09-30 23:05:20.538067] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:36:53.531 [2024-09-30 23:05:20.538121] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:36:53.531 [2024-09-30 23:05:20.538244] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:36:53.531 [2024-09-30 23:05:20.538296] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:36:53.531 [2024-09-30 23:05:20.539593] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:36:53.531 [2024-09-30 23:05:20.539640] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:36:53.531 [2024-09-30 23:05:20.541057] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:36:53.531 [2024-09-30 23:05:20.541103] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:36:53.791 [2024-09-30 23:05:20.695908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.791 [2024-09-30 23:05:20.747981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:36:53.791 [2024-09-30 23:05:20.751818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.791 [2024-09-30 23:05:20.800094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.791 [2024-09-30 23:05:20.804431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:36:54.053 [2024-09-30 23:05:20.848041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.053 [2024-09-30 23:05:20.851129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:36:54.053 [2024-09-30 23:05:20.898331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:36:54.053 Running I/O for 1 seconds... 00:36:54.314 Running I/O for 1 seconds... 00:36:54.314 Running I/O for 1 seconds... 00:36:54.314 Running I/O for 1 seconds... 
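[Note] A quick arithmetic cross-check on the result tables that follow: at a 4096-byte I/O size, MiB/s = IOPS x 4096 / 2^20 = IOPS / 256, so the reported IOPS and MiB/s pairs should line up. For the unmap job:

    $ echo '13578.58 / 256' | bc -l
    53.04132812500000000000              # matches the 53.04 MiB/s column

The same holds for the flush job (185619.87 / 256 = 725.08 MiB/s).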
00:36:55.258 13517.00 IOPS, 52.80 MiB/s
00:36:55.258 Latency(us)
00:36:55.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:55.258 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:36:55.258 Nvme1n1 : 1.01 13578.58 53.04 0.00 0.00 9398.20 4724.05 11578.03
00:36:55.258 ===================================================================================================================
00:36:55.258 Total : 13578.58 53.04 0.00 0.00 9398.20 4724.05 11578.03
00:36:55.258 10799.00 IOPS, 42.18 MiB/s
00:36:55.258 Latency(us)
00:36:55.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:55.258 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:36:55.258 Nvme1n1 : 1.01 10848.16 42.38 0.00 0.00 11753.35 5625.17 15400.96
00:36:55.258 ===================================================================================================================
00:36:55.258 Total : 10848.16 42.38 0.00 0.00 11753.35 5625.17 15400.96
00:36:55.258 11151.00 IOPS, 43.56 MiB/s
00:36:55.258 Latency(us)
00:36:55.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:55.258 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:36:55.258 Nvme1n1 : 1.01 11224.39 43.85 0.00 0.00 11364.14 2307.41 18240.85
00:36:55.258 ===================================================================================================================
00:36:55.258 Total : 11224.39 43.85 0.00 0.00 11364.14 2307.41 18240.85
00:36:55.258 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 952356
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 952358
00:36:55.518 186000.00 IOPS, 726.56 MiB/s
00:36:55.518 Latency(us)
00:36:55.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:55.518 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:36:55.518 Nvme1n1 : 1.00 185619.87 725.08 0.00 0.00 685.76 310.61 2048.00
00:36:55.518 ===================================================================================================================
00:36:55.518 Total : 185619.87 725.08 0.00 0.00 685.76 310.61 2048.00
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 952361
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup
00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:36:55.519 23:05:22
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:55.518 rmmod nvme_tcp 00:36:55.518 rmmod nvme_fabrics 00:36:55.518 rmmod nvme_keyring 00:36:55.518 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 952002 ']' 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 952002 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 952002 ']' 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 952002 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:55.519 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 952002 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 952002' 00:36:55.780 killing process with pid 952002 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 952002 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 952002 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:36:55.780 23:05:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:55.780 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:58.327 00:36:58.327 real 0m13.113s 00:36:58.327 user 0m16.270s 00:36:58.327 sys 0m7.559s 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:58.327 ************************************ 00:36:58.327 END TEST nvmf_bdev_io_wait 00:36:58.327 ************************************ 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:58.327 ************************************ 00:36:58.327 START TEST nvmf_queue_depth 00:36:58.327 ************************************ 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:58.327 * Looking for test storage... 
00:36:58.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:36:58.327 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:58.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.327 --rc genhtml_branch_coverage=1 00:36:58.327 --rc genhtml_function_coverage=1 00:36:58.327 --rc genhtml_legend=1 00:36:58.327 --rc geninfo_all_blocks=1 00:36:58.327 --rc geninfo_unexecuted_blocks=1 00:36:58.327 00:36:58.327 ' 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:58.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.327 --rc genhtml_branch_coverage=1 00:36:58.327 --rc genhtml_function_coverage=1 00:36:58.327 --rc genhtml_legend=1 00:36:58.327 --rc geninfo_all_blocks=1 00:36:58.327 --rc geninfo_unexecuted_blocks=1 00:36:58.327 00:36:58.327 ' 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:58.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.327 --rc genhtml_branch_coverage=1 00:36:58.327 --rc genhtml_function_coverage=1 00:36:58.327 --rc genhtml_legend=1 00:36:58.327 --rc geninfo_all_blocks=1 00:36:58.327 --rc geninfo_unexecuted_blocks=1 00:36:58.327 00:36:58.327 ' 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:58.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.327 --rc genhtml_branch_coverage=1 00:36:58.327 --rc genhtml_function_coverage=1 00:36:58.327 --rc genhtml_legend=1 00:36:58.327 --rc geninfo_all_blocks=1 00:36:58.327 --rc 
geninfo_unexecuted_blocks=1 00:36:58.327 00:36:58.327 ' 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:58.327 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:36:58.328 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:37:06.493 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:06.494 23:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:06.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:06.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:06.494 Found net devices under 0000:31:00.0: cvl_0_0 00:37:06.494 23:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:06.494 Found net devices under 0000:31:00.1: cvl_0_1 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:06.494 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:06.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:06.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:37:06.495 00:37:06.495 --- 10.0.0.2 ping statistics --- 00:37:06.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.495 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:06.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:06.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:37:06.495 00:37:06.495 --- 10.0.0.1 ping statistics --- 00:37:06.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.495 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=956914 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 956914 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 956914 ']' 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
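The nvmftestinit steps just traced split the two e810 ports between the root namespace and a private one, then prove reachability with ping in both directions. Each command below appears verbatim in the trace above; only the grouping into one sequence is added here:

    ip netns add cvl_0_0_ns_spdk                        # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns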
00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:06.495 23:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.495 [2024-09-30 23:05:32.817077] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:06.495 [2024-09-30 23:05:32.818232] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:37:06.495 [2024-09-30 23:05:32.818287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:06.495 [2024-09-30 23:05:32.912017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.495 [2024-09-30 23:05:33.005487] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:06.495 [2024-09-30 23:05:33.005544] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:06.495 [2024-09-30 23:05:33.005552] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:06.495 [2024-09-30 23:05:33.005559] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:06.495 [2024-09-30 23:05:33.005567] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:06.495 [2024-09-30 23:05:33.005596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.495 [2024-09-30 23:05:33.081022] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:06.495 [2024-09-30 23:05:33.081314] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
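nvmfappstart then launches the target inside that namespace with the flags shown above: one core (mask 0x2), interrupt mode, full tracepoint mask. The interrupt-mode notices confirm it came up as requested. Reduced to its essentials; waitforlisten is the autotest helper that polls the RPC socket, and its implementation is not part of this log:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock accepts RPCs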
00:37:06.756 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:06.756 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.757 [2024-09-30 23:05:33.678460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.757 Malloc0 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
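queue_depth.sh provisions the target over /var/tmp/spdk.sock with a short sequence of rpc_cmd calls: transport, backing bdev, subsystem plus namespace, and a listener (the add_listener call just above completes with the "Listening on 10.0.0.2 port 4420" notice below). The flags are exactly as captured in the trace; the 64 and 512 come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set earlier in the script:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # flags as captured in the trace
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM disk, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420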
00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.757 [2024-09-30 23:05:33.766658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:06.757 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=957138 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 957138 /var/tmp/bdevperf.sock 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 957138 ']' 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:07.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:07.018 23:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:07.018 [2024-09-30 23:05:33.832256] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:37:07.018 [2024-09-30 23:05:33.832318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957138 ] 00:37:07.018 [2024-09-30 23:05:33.914174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.018 [2024-09-30 23:05:34.009975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.961 23:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:07.961 23:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:37:07.961 23:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:07.961 23:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.961 23:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:07.961 NVMe0n1 00:37:07.961 23:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.961 23:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:07.961 Running I/O for 10 seconds... 00:37:18.006 8374.00 IOPS, 32.71 MiB/s 8705.00 IOPS, 34.00 MiB/s 9552.67 IOPS, 37.32 MiB/s 10376.50 IOPS, 40.53 MiB/s 11046.40 IOPS, 43.15 MiB/s 11439.17 IOPS, 44.68 MiB/s 11753.14 IOPS, 45.91 MiB/s 12019.38 IOPS, 46.95 MiB/s 12182.56 IOPS, 47.59 MiB/s 12321.50 IOPS, 48.13 MiB/s 00:37:18.006 Latency(us) 00:37:18.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:18.006 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:37:18.006 Verification LBA range: start 0x0 length 0x4000 00:37:18.006 NVMe0n1 : 10.05 12362.95 48.29 0.00 0.00 82516.35 9284.27 68157.44 00:37:18.006 =================================================================================================================== 00:37:18.006 Total : 12362.95 48.29 0.00 0.00 82516.35 9284.27 68157.44 00:37:18.006 { 00:37:18.006 "results": [ 00:37:18.006 { 00:37:18.006 "job": "NVMe0n1", 00:37:18.006 "core_mask": "0x1", 00:37:18.006 "workload": "verify", 00:37:18.006 "status": "finished", 00:37:18.006 "verify_range": { 00:37:18.006 "start": 0, 00:37:18.006 "length": 16384 00:37:18.006 }, 00:37:18.006 "queue_depth": 1024, 00:37:18.006 "io_size": 4096, 00:37:18.006 "runtime": 10.049299, 00:37:18.006 "iops": 12362.951883509486, 00:37:18.006 "mibps": 48.29278079495893, 00:37:18.006 "io_failed": 0, 00:37:18.006 "io_timeout": 0, 00:37:18.006 "avg_latency_us": 82516.34842049061, 00:37:18.006 "min_latency_us": 9284.266666666666, 00:37:18.006 "max_latency_us": 68157.44 00:37:18.006 } 00:37:18.006 ], 00:37:18.006 "core_count": 1 00:37:18.006 } 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 957138 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 957138 ']' 00:37:18.006 
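The initiator side of the run above is bdevperf started against its own RPC socket: -z makes it start suspended and wait for RPCs, the NVMe controller is attached over TCP, and perform_tests drives the 10-second verify workload at queue depth 1024 with 4 KiB I/Os, matching the -q 1024 -o 4096 -w verify -t 10 flags and the IOPS ramp and summary JSON above. Condensed from the trace; the backgrounding and pid bookkeeping here are a simplification of what the script actually does:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests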
23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 957138 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 957138 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 957138' 00:37:18.006 killing process with pid 957138 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 957138 00:37:18.006 Received shutdown signal, test time was about 10.000000 seconds 00:37:18.006 00:37:18.006 Latency(us) 00:37:18.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:18.006 =================================================================================================================== 00:37:18.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:18.006 23:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 957138 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.267 rmmod nvme_tcp 00:37:18.267 rmmod nvme_fabrics 00:37:18.267 rmmod nvme_keyring 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 956914 ']' 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 956914 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 956914 ']' 00:37:18.267 23:05:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 956914 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 956914 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 956914' 00:37:18.267 killing process with pid 956914 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 956914 00:37:18.267 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 956914 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:18.528 23:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.438 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:20.438 00:37:20.438 real 0m22.536s 00:37:20.438 user 0m24.121s 00:37:20.438 sys 0m7.729s 00:37:20.438 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:20.438 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:20.438 ************************************ 00:37:20.438 END TEST nvmf_queue_depth 00:37:20.438 ************************************ 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:20.700 ************************************ 00:37:20.700 START TEST nvmf_target_multipath 00:37:20.700 ************************************ 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:20.700 * Looking for test storage... 00:37:20.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:20.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.700 --rc genhtml_branch_coverage=1 00:37:20.700 --rc genhtml_function_coverage=1 00:37:20.700 --rc genhtml_legend=1 00:37:20.700 --rc geninfo_all_blocks=1 00:37:20.700 --rc geninfo_unexecuted_blocks=1 00:37:20.700 00:37:20.700 ' 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:20.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.700 --rc genhtml_branch_coverage=1 00:37:20.700 --rc genhtml_function_coverage=1 00:37:20.700 --rc genhtml_legend=1 00:37:20.700 --rc geninfo_all_blocks=1 00:37:20.700 --rc geninfo_unexecuted_blocks=1 00:37:20.700 00:37:20.700 ' 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:20.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.700 --rc genhtml_branch_coverage=1 00:37:20.700 --rc genhtml_function_coverage=1 00:37:20.700 --rc genhtml_legend=1 00:37:20.700 --rc geninfo_all_blocks=1 00:37:20.700 --rc geninfo_unexecuted_blocks=1 00:37:20.700 00:37:20.700 ' 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:20.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.700 --rc genhtml_branch_coverage=1 00:37:20.700 --rc genhtml_function_coverage=1 00:37:20.700 --rc 
genhtml_legend=1 00:37:20.700 --rc geninfo_all_blocks=1 00:37:20.700 --rc geninfo_unexecuted_blocks=1 00:37:20.700 00:37:20.700 ' 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.700 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.961 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.961 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:20.961 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:20.961 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.961 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.962 23:05:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:37:20.962 23:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.108 23:05:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.108 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:29.109 23:05:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:29.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:29.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:29.109 23:05:55 
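The discovery pass above matches the two E810 ports (Intel vendor 0x8086, device 0x159b) and then globs sysfs for the kernel net devices bound to each. Roughly equivalent standalone commands (a sketch; the harness drives this from a cached lspci scan rather than walking sysfs directly):

  #!/usr/bin/env bash
  # Sketch: enumerate Intel E810 ports (0x1592/0x159b) and their netdevs.
  shopt -s nullglob
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      [[ $vendor == 0x8086 ]] || continue
      [[ $device == 0x1592 || $device == 0x159b ]] || continue
      echo "Found ${dev##*/} ($vendor - $device)"
      for net in "$dev"/net/*; do
          echo "Found net devices under ${dev##*/}: ${net##*/}"
      done
  done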
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:29.109 Found net devices under 0000:31:00.0: cvl_0_0 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:29.109 Found net devices under 0000:31:00.1: cvl_0_1 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:29.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:37:29.109 00:37:29.109 --- 10.0.0.2 ping statistics --- 00:37:29.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.109 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:29.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:37:29.109 00:37:29.109 --- 10.0.0.1 ping statistics --- 00:37:29.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.109 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:29.109 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:37:29.110 only one NIC for nvmf test 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:29.110 rmmod nvme_tcp 00:37:29.110 rmmod nvme_fabrics 00:37:29.110 rmmod nvme_keyring 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p 
]] 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.110 23:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:31.026 23:05:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:31.026 00:37:31.026 real 0m10.071s 00:37:31.026 user 0m2.088s 00:37:31.026 sys 0m5.921s 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:31.026 ************************************ 00:37:31.026 END TEST nvmf_target_multipath 00:37:31.026 ************************************ 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:31.026 ************************************ 00:37:31.026 START TEST nvmf_zcopy 00:37:31.026 ************************************ 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:31.026 * Looking for test storage... 
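Both the multipath attempt above (which bails out with "only one NIC for nvmf test") and the zcopy run that starts here go through the same nvmftestinit/nvmftestfini plumbing. Stripped of tracing, the topology it builds is the following; the commands are taken from the trace itself, only the comments are added:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the ACCEPT rule is tagged so teardown can filter exactly it back out:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host
  # teardown (iptr in the trace): reload the ruleset minus the tagged lines
  iptables-save | grep -v SPDK_NVMF | iptables-restore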
00:37:31.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:31.026 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:31.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.027 --rc genhtml_branch_coverage=1 00:37:31.027 --rc genhtml_function_coverage=1 00:37:31.027 --rc genhtml_legend=1 00:37:31.027 --rc geninfo_all_blocks=1 00:37:31.027 --rc geninfo_unexecuted_blocks=1 00:37:31.027 00:37:31.027 ' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:31.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.027 --rc genhtml_branch_coverage=1 00:37:31.027 --rc genhtml_function_coverage=1 00:37:31.027 --rc genhtml_legend=1 00:37:31.027 --rc geninfo_all_blocks=1 00:37:31.027 --rc geninfo_unexecuted_blocks=1 00:37:31.027 00:37:31.027 ' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:31.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.027 --rc genhtml_branch_coverage=1 00:37:31.027 --rc genhtml_function_coverage=1 00:37:31.027 --rc genhtml_legend=1 00:37:31.027 --rc geninfo_all_blocks=1 00:37:31.027 --rc geninfo_unexecuted_blocks=1 00:37:31.027 00:37:31.027 ' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:31.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.027 --rc genhtml_branch_coverage=1 00:37:31.027 --rc genhtml_function_coverage=1 00:37:31.027 --rc genhtml_legend=1 00:37:31.027 --rc geninfo_all_blocks=1 00:37:31.027 --rc geninfo_unexecuted_blocks=1 00:37:31.027 00:37:31.027 ' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.027 23:05:57 
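The heavily repeated /opt/golangci, /opt/protoc, and /opt/go prefixes in the PATH echoes above are an artifact of paths/export.sh prepending the same directories every time a test script re-sources the environment; the harness tolerates this, since PATH lookup stops at the first hit. If one wanted to collapse it (purely illustrative, the tests do not do this):

  # Hypothetical PATH dedup, first occurrence wins:
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
  PATH=${PATH%:}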
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:37:31.027 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:37:31.028 23:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:37:39.170 23:06:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:39.170 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:39.171 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:39.171 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:39.171 Found net devices under 0000:31:00.0: cvl_0_0 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:39.171 Found net devices under 0000:31:00.1: cvl_0_1 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:39.171 23:06:05 
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:39.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:39.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms
00:37:39.171
00:37:39.171 --- 10.0.0.2 ping statistics ---
00:37:39.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:39.171 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:39.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:39.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms
00:37:39.171
00:37:39.171 --- 10.0.0.1 ping statistics ---
00:37:39.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:39.171 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=967869
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 967869
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
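[editor's note] The nvmf_tcp_init xtrace above is dense, so here is the same bring-up condensed into a standalone sketch. Interface names and addresses are copied from the log (ice-driven E810 ports cvl_0_0/cvl_0_1 wired back-to-back); the script shape is a paraphrase, not the literal nvmf/common.sh code:

    #!/usr/bin/env bash
    # The target port moves into its own netns so initiator (10.0.0.1) and
    # target (10.0.0.2) talk over a real NIC pair instead of loopback.
    set -euo pipefail
    NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0        # becomes the namespaced, target-side port
    INITIATOR_IF=cvl_0_1     # stays in the root namespace

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP listener port, then verify reachability both ways.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1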
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 967869 ']'
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:39.171 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:39.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:39.172 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:37:39.172 23:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.172 [2024-09-30 23:06:05.576383] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:39.172 [2024-09-30 23:06:05.577944] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:37:39.172 [2024-09-30 23:06:05.578020] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:39.172 [2024-09-30 23:06:05.668341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:39.172 [2024-09-30 23:06:05.762401] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:39.172 [2024-09-30 23:06:05.762462] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:39.172 [2024-09-30 23:06:05.762471] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:39.172 [2024-09-30 23:06:05.762478] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:39.172 [2024-09-30 23:06:05.762485] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:39.172 [2024-09-30 23:06:05.762512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:37:39.172 [2024-09-30 23:06:05.837561] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:37:39.172 [2024-09-30 23:06:05.837845] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
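[editor's note] The target was started with -m 0x2 (core 1 only), -e 0xFFFF (matching the "Tracepoint Group Mask 0xFFFF" notice) and --interrupt-mode, which is why the reactor comes up on core 1 and both spdk_threads are switched to intr mode above. A hand-rolled equivalent of the launch plus the waitforlisten gate; the polling loop is a simplified stand-in for the autotest_common.sh helper, not a copy of it:

    NS_EXEC="ip netns exec cvl_0_0_ns_spdk"
    $NS_EXEC ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    # Poll the RPC socket until the target answers (max ~100 tries, as in the log).
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done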
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.432 [2024-09-30 23:06:06.431393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.432 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.693 [2024-09-30 23:06:06.459700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.693 malloc0
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
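[editor's note] The rpc_cmd calls above map one-to-one onto plain scripts/rpc.py invocations. A sketch of the same provisioning sequence, assuming the default RPC socket (arguments copied verbatim from the xtrace):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled (-c 0: no in-capsule data)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1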
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:37:39.693 {
00:37:39.693   "params": {
00:37:39.693     "name": "Nvme$subsystem",
00:37:39.693     "trtype": "$TEST_TRANSPORT",
00:37:39.693     "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:39.693     "adrfam": "ipv4",
00:37:39.693     "trsvcid": "$NVMF_PORT",
00:37:39.693     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:39.693     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:39.693     "hdgst": ${hdgst:-false},
00:37:39.693     "ddgst": ${ddgst:-false}
00:37:39.693   },
00:37:39.693   "method": "bdev_nvme_attach_controller"
00:37:39.693 }
00:37:39.693 EOF
00:37:39.693 )")
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq .
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:37:39.693 23:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:37:39.693   "params": {
00:37:39.693     "name": "Nvme1",
00:37:39.693     "trtype": "tcp",
00:37:39.693     "traddr": "10.0.0.2",
00:37:39.693     "adrfam": "ipv4",
00:37:39.693     "trsvcid": "4420",
00:37:39.693     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:37:39.693     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:37:39.693     "hdgst": false,
00:37:39.693     "ddgst": false
00:37:39.693   },
00:37:39.693   "method": "bdev_nvme_attach_controller"
00:37:39.693 }'
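[editor's note] gen_nvmf_target_json expands that heredoc once per subsystem and splices the fragments into a full SPDK JSON config; the outer "subsystems"/"bdev" wrapper is not shown in the xtrace, so its shape below is recalled from nvmf/common.sh and should be treated as approximate. bdevperf then reads the document through process substitution, which is why the command line says --json /dev/fd/62:

    # What bdevperf ultimately consumes, approximately:
    # {
    #   "subsystems": [
    #     { "subsystem": "bdev",
    #       "config": [
    #         { "params": { ...the printed attach params above... },
    #           "method": "bdev_nvme_attach_controller" },
    #         { "method": "bdev_wait_for_examine" }
    #       ] }
    #   ]
    # }
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192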
[2024-09-30 23:06:06.559044] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:37:39.693 [2024-09-30 23:06:06.559129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968054 ]
00:37:39.693 [2024-09-30 23:06:06.644384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:39.954 [2024-09-30 23:06:06.719056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:37:39.954 Running I/O for 10 seconds...
00:37:50.264 6352.00 IOPS, 49.62 MiB/s
00:37:50.264 6401.50 IOPS, 50.01 MiB/s
00:37:50.264 6400.33 IOPS, 50.00 MiB/s
00:37:50.264 6406.25 IOPS, 50.05 MiB/s
00:37:50.264 6660.60 IOPS, 52.04 MiB/s
00:37:50.264 7141.83 IOPS, 55.80 MiB/s
00:37:50.264 7485.86 IOPS, 58.48 MiB/s
00:37:50.264 7749.00 IOPS, 60.54 MiB/s
00:37:50.264 7949.56 IOPS, 62.11 MiB/s
00:37:50.264 8111.10 IOPS, 63.37 MiB/s
00:37:50.264                                                                  Latency(us)
00:37:50.265 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:50.265 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:37:50.265 Verification LBA range: start 0x0 length 0x1000
00:37:50.265 Nvme1n1                     :      10.05    8081.81      63.14       0.00       0.00   15732.14    2921.81   44127.57
00:37:50.265 ===================================================================================================================
00:37:50.265 Total                       :               8081.81      63.14       0.00       0.00   15732.14    2921.81   44127.57
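[editor's note] Two quick cross-checks on the table above. Throughput is just IOPS times the 8 KiB IO size, and with a queue depth of 128 the average latency should sit near qd/IOPS (Little's law); both line up with the reported 63.14 MiB/s and ~15.7 ms average:

    $ echo '8081.81 * 8192 / 1048576' | bc -l     # MiB/s from IOPS at 8 KiB
    63.13...
    $ echo '128 / 8081.81 * 1000000' | bc -l      # expected average latency, us
    15838...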
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=970507
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF ... EOF)") [editor's note: heredoc template identical to the one shown for the first bdevperf run above]
00:37:50.265 [2024-09-30 23:06:17.142929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:37:50.265 [2024-09-30 23:06:17.142958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq .
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:37:50.265 23:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ ... }' [editor's note: resolved Nvme1 config identical to the one printed for the first bdevperf run above]
[editor's note: the add_ns error pair repeats at 23:06:17.154, .166 and .178 while bdevperf starts up]
00:37:50.265 [2024-09-30 23:06:17.186525] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:37:50.265 [2024-09-30 23:06:17.186573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970507 ]
[editor's note: the error pair repeats at 23:06:17.190, .202, .214, .226, .238 and .250]
00:37:50.265 [2024-09-30 23:06:17.261386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
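[editor's note] The *ERROR* pairs flooding this part of the log are expected, not a failure: while the 5-second randrw job runs, zcopy.sh keeps re-adding the namespace that is already attached, so every attempt pauses the subsystem, fails with "NSID 1 already in use", and resumes it, exercising zero-copy request teardown across pause/resume. In rough paraphrase (see test/nvmf/target/zcopy.sh for the real loop):

    while kill -0 "$perfpid" 2> /dev/null; do
        # Fails by design: NSID 1 is already attached to cnode1.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done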
[editor's note: from 2024-09-30 23:06:17.262 through 23:06:19.767 the same two-line error pair repeats roughly every 12 ms — about two hundred repetitions in all — while the 5-second randrw job runs; the repeats are elided here and only the distinct log events are kept]
00:37:50.526 [2024-09-30 23:06:17.315578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:37:50.526 Running I/O for 5 seconds...
00:37:51.571 18869.00 IOPS, 147.41 MiB/s
00:37:52.614 18943.00 IOPS, 147.99 MiB/s
00:37:52.875 [2024-09-30 23:06:19.767251] [editor's note: capture truncated mid-entry]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:52.875 [2024-09-30 23:06:19.782263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:52.875 [2024-09-30 23:06:19.782278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:52.875 [2024-09-30 23:06:19.794919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:52.875 [2024-09-30 23:06:19.794933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:52.875 [2024-09-30 23:06:19.807799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:52.875 [2024-09-30 23:06:19.807814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:52.875 [2024-09-30 23:06:19.822289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:52.875 [2024-09-30 23:06:19.822304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:52.875 [2024-09-30 23:06:19.835530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:52.875 [2024-09-30 23:06:19.835545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:52.875 [2024-09-30 23:06:19.850415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:52.875 [2024-09-30 23:06:19.850430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:52.875 [2024-09-30 23:06:19.863112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:52.875 [2024-09-30 23:06:19.863126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:52.875 [2024-09-30 23:06:19.878197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:52.875 [2024-09-30 23:06:19.878212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.891475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.891489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.906329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.906344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.919142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.919158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.931401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.931415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.946015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.946034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.958841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.958856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.971159] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.971174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.982666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.982681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:19.994891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:19.994910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.005999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.006015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.018936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.018950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.031273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.031288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.046297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.046313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.059439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.059455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.074297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.074312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.087396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.087410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.102068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.102083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.114790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.114804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.127484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.127499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.136 [2024-09-30 23:06:20.141902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.136 [2024-09-30 23:06:20.141918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.154361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.154377] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.167169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.167183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.182082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.182096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.194861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.194881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.206917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.206932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.219641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.219655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.234246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.234261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.247210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.247224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.262374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.262389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.275231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.275245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.290126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.290141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.303048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.303062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.314964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.314979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.327775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.327789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.342342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.342356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.355110] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.355125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.367486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.367500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.381910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.381924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.394581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.394595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.396 [2024-09-30 23:06:20.407737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.396 [2024-09-30 23:06:20.407751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.422116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.422130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.435156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.435170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.450193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.450211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 18942.00 IOPS, 147.98 MiB/s [2024-09-30 23:06:20.462990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.463004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.475265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.475278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.490062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.490076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.502917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.502931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.515146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.515161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.526814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.526828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.538763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 
23:06:20.538777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.551466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.551480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.566363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.566378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.578982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.578996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.591487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.591501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.606108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.606123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.618733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.618747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.631057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.631071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.645891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.645909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.658905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.658919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.658 [2024-09-30 23:06:20.671356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.658 [2024-09-30 23:06:20.671370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.919 [2024-09-30 23:06:20.686364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.919 [2024-09-30 23:06:20.686379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.919 [2024-09-30 23:06:20.699158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.919 [2024-09-30 23:06:20.699172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.919 [2024-09-30 23:06:20.713382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.919 [2024-09-30 23:06:20.713397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.919 [2024-09-30 23:06:20.726737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.919 [2024-09-30 23:06:20.726751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.919 [2024-09-30 23:06:20.738967] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.919 [2024-09-30 23:06:20.738982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.919 [2024-09-30 23:06:20.750906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.919 [2024-09-30 23:06:20.750921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.919 [2024-09-30 23:06:20.763711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.919 [2024-09-30 23:06:20.763726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.919 [2024-09-30 23:06:20.778466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.919 [2024-09-30 23:06:20.778480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.791697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.791710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.805856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.805870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.818564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.818578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.831374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.831388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.846052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.846066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.859089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.859103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.871541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.871555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.886184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.886199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.899201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.899215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.914186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.914201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:53.920 [2024-09-30 23:06:20.926866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:53.920 [2024-09-30 23:06:20.926881] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:20.938501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:20.938516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:20.951552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:20.951565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:20.966109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:20.966123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:20.978686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:20.978700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:20.990824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:20.990838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.003565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.003580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.018246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.018260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.031320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.031334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.046161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.046175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.059131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.059145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.071205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.071218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.086076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.086090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.098390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.098404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.111261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.111275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.126464] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.126479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.138901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.138915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.151497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.151512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.166207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.166221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.179038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.179052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.181 [2024-09-30 23:06:21.191507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.181 [2024-09-30 23:06:21.191524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.206290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.206304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.218984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.218998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.230744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.230758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.243855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.243870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.257866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.257880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.270777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.270793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.283586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.283601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.297727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.297742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.310657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.310671] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.323217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.323231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.338230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.442 [2024-09-30 23:06:21.338245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.442 [2024-09-30 23:06:21.350984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.443 [2024-09-30 23:06:21.350999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.443 [2024-09-30 23:06:21.363745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.443 [2024-09-30 23:06:21.363759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.443 [2024-09-30 23:06:21.378138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.443 [2024-09-30 23:06:21.378152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.443 [2024-09-30 23:06:21.391243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.443 [2024-09-30 23:06:21.391257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.443 [2024-09-30 23:06:21.405883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.443 [2024-09-30 23:06:21.405902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.443 [2024-09-30 23:06:21.418817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.443 [2024-09-30 23:06:21.418831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.443 [2024-09-30 23:06:21.431325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.443 [2024-09-30 23:06:21.431339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.443 [2024-09-30 23:06:21.446202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.443 [2024-09-30 23:06:21.446221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.459062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.459076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 18957.25 IOPS, 148.10 MiB/s [2024-09-30 23:06:21.473755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.473769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.486745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.486760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.499267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.499281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.514161] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.514176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.527259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.527273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.542289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.542304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.555170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.555184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.570150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.570166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.582768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.582783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.595119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.595134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.606453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.606468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.619734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.619748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.634126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.634140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.647125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.647140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.658490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.658504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.671426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.671440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.685538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.685552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.698927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.698949] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.704 [2024-09-30 23:06:21.710824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.704 [2024-09-30 23:06:21.710838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.723495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.723509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.738203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.738219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.750692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.750707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.762616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.762631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.775470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.775483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.789968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.789982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.803186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.803201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.818147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.818161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.830990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.831005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.843645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.843659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.858039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.858054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.870935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.870949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.883357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.883371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.965 [2024-09-30 23:06:21.898404] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.965 [2024-09-30 23:06:21.898419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.966 [2024-09-30 23:06:21.911153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.966 [2024-09-30 23:06:21.911168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.966 [2024-09-30 23:06:21.922791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.966 [2024-09-30 23:06:21.922805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.966 [2024-09-30 23:06:21.935502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.966 [2024-09-30 23:06:21.935516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.966 [2024-09-30 23:06:21.949814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.966 [2024-09-30 23:06:21.949829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.966 [2024-09-30 23:06:21.962878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.966 [2024-09-30 23:06:21.962892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:54.966 [2024-09-30 23:06:21.975699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:54.966 [2024-09-30 23:06:21.975714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:21.990196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:21.990211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.003231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.003246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.018086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.018101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.031024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.031038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.045899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.045913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.058761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.058775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.071429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.071443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.086568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.086582] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.099222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.099236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.114179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.114194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.126879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.126898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.138891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.138908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.151824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.151839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.166821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.166836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.179454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.179468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.194274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.194289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.207139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.207154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.221836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.221850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.227 [2024-09-30 23:06:22.235045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.227 [2024-09-30 23:06:22.235059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.247902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.247917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.262044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.262058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.275177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.275190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.290088] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.290102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.303048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.303063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.315249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.315263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.330246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.330260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.343284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.343298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.358124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.358139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.370814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.370829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.383421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.383435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.398370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.398385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.411131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.411145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.425943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.425958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.438767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.438782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.451067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.451081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 18951.60 IOPS, 148.06 MiB/s [2024-09-30 23:06:22.464920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.464935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 00:37:55.488 Latency(us) 00:37:55.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.488 Job: Nvme1n1 (Core Mask 0x1, 
workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:37:55.488 Nvme1n1 : 5.01 18954.37 148.08 0.00 0.00 6747.12 2826.24 12506.45 00:37:55.488 =================================================================================================================== 00:37:55.488 Total : 18954.37 148.08 0.00 0.00 6747.12 2826.24 12506.45 00:37:55.488 [2024-09-30 23:06:22.474898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.474912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.486899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.486914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.488 [2024-09-30 23:06:22.498898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.488 [2024-09-30 23:06:22.498911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.748 [2024-09-30 23:06:22.510900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.748 [2024-09-30 23:06:22.510914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.748 [2024-09-30 23:06:22.522897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.748 [2024-09-30 23:06:22.522908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.749 [2024-09-30 23:06:22.534890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.749 [2024-09-30 23:06:22.534902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.749 [2024-09-30 23:06:22.546892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.749 [2024-09-30 23:06:22.546905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.749 [2024-09-30 23:06:22.558892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.749 [2024-09-30 23:06:22.558904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.749 [2024-09-30 23:06:22.570892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.749 [2024-09-30 23:06:22.570908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.749 [2024-09-30 23:06:22.582889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:55.749 [2024-09-30 23:06:22.582900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:55.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (970507) - No such process 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 970507 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
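[Editor's note -- illustrative, not part of the captured log.] The error storm summarized above is the expected failure mode of re-adding an NSID that is still attached: while the I/O job runs, target/zcopy.sh apparently keeps issuing nvmf_subsystem_add_ns against nqn.2016-06.io.spdk:cnode1 with an NSID that is already taken, and every attempt fails without disturbing the job. A minimal sketch reproducing one such failure by hand, assuming a running SPDK target that already exposes NSID 1 on that subsystem (the malloc1 bdev name is hypothetical):

    # From the SPDK tree of the running target: create a spare bdev,
    # then try to attach it under an NSID that is already in use.
    scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
    # target log: subsystem.c: ... Requested NSID 1 already in use
    # RPC layer:  nvmf_rpc.c: ... Unable to add namespace

Looping the second call, one attempt every ~13 ms, produces exactly the pair-per-attempt pattern collapsed above.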
00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:55.749 delay0 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.749 23:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:37:56.009 [2024-09-30 23:06:22.777057] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:02.597 Initializing NVMe Controllers 00:38:02.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:02.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:02.597 Initialization complete. Launching workers. 
00:38:02.597 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 10951 00:38:02.597 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 11164, failed to submit 77 00:38:02.597 success 11028, unsuccessful 136, failed 0 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:02.597 rmmod nvme_tcp 00:38:02.597 rmmod nvme_fabrics 00:38:02.597 rmmod nvme_keyring 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 967869 ']' 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 967869 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 967869 ']' 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 967869 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 967869 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 967869' 00:38:02.597 killing process with pid 967869 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 967869 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 967869 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:02.597 23:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:02.597 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:02.859 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:38:02.859 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:02.859 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:38:02.859 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:02.859 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:02.859 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.859 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:02.859 23:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.776 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:04.776 00:38:04.776 real 0m34.044s 00:38:04.776 user 0m42.823s 00:38:04.776 sys 0m12.761s 00:38:04.776 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:04.776 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:04.776 ************************************ 00:38:04.776 END TEST nvmf_zcopy 00:38:04.776 ************************************ 00:38:04.776 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:04.776 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:04.776 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:04.776 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:04.776 ************************************ 00:38:04.776 START TEST nvmf_nmic 00:38:04.776 ************************************ 00:38:04.776 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:05.038 * Looking for test storage... 
00:38:05.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:05.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.038 --rc genhtml_branch_coverage=1 00:38:05.038 --rc genhtml_function_coverage=1 00:38:05.038 --rc genhtml_legend=1 00:38:05.038 --rc geninfo_all_blocks=1 00:38:05.038 --rc geninfo_unexecuted_blocks=1 00:38:05.038 00:38:05.038 ' 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:05.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.038 --rc genhtml_branch_coverage=1 00:38:05.038 --rc genhtml_function_coverage=1 00:38:05.038 --rc genhtml_legend=1 00:38:05.038 --rc geninfo_all_blocks=1 00:38:05.038 --rc geninfo_unexecuted_blocks=1 00:38:05.038 00:38:05.038 ' 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:05.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.038 --rc genhtml_branch_coverage=1 00:38:05.038 --rc genhtml_function_coverage=1 00:38:05.038 --rc genhtml_legend=1 00:38:05.038 --rc geninfo_all_blocks=1 00:38:05.038 --rc geninfo_unexecuted_blocks=1 00:38:05.038 00:38:05.038 ' 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:05.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.038 --rc genhtml_branch_coverage=1 00:38:05.038 --rc genhtml_function_coverage=1 00:38:05.038 --rc genhtml_legend=1 00:38:05.038 --rc geninfo_all_blocks=1 00:38:05.038 --rc geninfo_unexecuted_blocks=1 00:38:05.038 00:38:05.038 ' 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:05.038 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:05.039 23:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:05.039 23:06:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:05.039 23:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:13.182 23:06:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:13.182 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:13.182 23:06:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:13.182 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:13.182 Found net devices under 0000:31:00.0: cvl_0_0 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:13.182 23:06:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:13.182 Found net devices under 0000:31:00.1: cvl_0_1 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
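The nvmf_tcp_init trace above and below boils down to a two-port physical loopback: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic crosses a real link rather than lo. Collected in one place, the setup amounts to the following (a sketch assuming the renamed cvl_* netdevs from the discovery step above; run as root):

    ip netns add cvl_0_0_ns_spdk                        # private ns for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator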
00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:13.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:13.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:38:13.182 00:38:13.182 --- 10.0.0.2 ping statistics --- 00:38:13.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.182 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:13.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:13.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:38:13.182 00:38:13.182 --- 10.0.0.1 ping statistics --- 00:38:13.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.182 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:13.182 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=977012 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 977012 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 977012 ']' 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.183 23:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.183 [2024-09-30 23:06:39.567466] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:13.183 [2024-09-30 23:06:39.568448] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:38:13.183 [2024-09-30 23:06:39.568484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.183 [2024-09-30 23:06:39.653703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:13.183 [2024-09-30 23:06:39.722056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.183 [2024-09-30 23:06:39.722093] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.183 [2024-09-30 23:06:39.722101] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.183 [2024-09-30 23:06:39.722109] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.183 [2024-09-30 23:06:39.722115] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.183 [2024-09-30 23:06:39.722271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.183 [2024-09-30 23:06:39.722416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:13.183 [2024-09-30 23:06:39.722567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.183 [2024-09-30 23:06:39.722568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:13.183 [2024-09-30 23:06:39.797982] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:13.183 [2024-09-30 23:06:39.799172] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:13.183 [2024-09-30 23:06:39.799483] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:13.183 [2024-09-30 23:06:39.800055] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
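Three of the four poll-group threads have flipped to interrupt mode at this point; the fourth follows below. This run launches nvmf_tgt with --interrupt-mode, so instead of busy-polling, each reactor parks in an fd-based wait until work arrives, and the thread.c notices confirm that the app thread plus every nvmf_tgt_poll_group thread made the switch. Reconstructed from the trace above, the invocation is (inside the target namespace; -m 0xF selects four cores, hence the four poll groups):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF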
00:38:13.183 [2024-09-30 23:06:39.800110] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.443 [2024-09-30 23:06:40.415326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.443 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.703 Malloc0 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.703 23:06:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.703 [2024-09-30 23:06:40.495561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:38:13.703 test case1: single bdev can't be used in multiple subsystems 00:38:13.703 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.704 [2024-09-30 23:06:40.518970] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:38:13.704 [2024-09-30 23:06:40.518998] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:38:13.704 [2024-09-30 23:06:40.519009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.704 request: 00:38:13.704 { 00:38:13.704 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:13.704 "namespace": { 00:38:13.704 "bdev_name": "Malloc0", 00:38:13.704 "no_auto_visible": false 00:38:13.704 }, 00:38:13.704 "method": "nvmf_subsystem_add_ns", 00:38:13.704 "req_id": 1 00:38:13.704 } 00:38:13.704 Got JSON-RPC error response 00:38:13.704 response: 00:38:13.704 { 00:38:13.704 "code": -32602, 00:38:13.704 "message": "Invalid parameters" 00:38:13.704 } 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:38:13.704 Adding namespace failed - expected result. 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:38:13.704 test case2: host connect to nvmf target in multiple paths 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:13.704 [2024-09-30 23:06:40.531056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.704 23:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:14.031 23:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:38:14.649 23:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:38:14.649 23:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:38:14.649 23:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:38:14.649 23:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:38:14.649 23:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:38:16.584 23:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:38:16.584 23:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:38:16.584 23:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:38:16.584 23:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:38:16.584 23:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:38:16.584 23:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:38:16.584 23:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:16.584 [global] 00:38:16.584 thread=1 00:38:16.584 invalidate=1 00:38:16.584 rw=write 00:38:16.584 time_based=1 00:38:16.584 runtime=1 00:38:16.584 ioengine=libaio 00:38:16.584 direct=1 00:38:16.584 bs=4096 00:38:16.584 iodepth=1 
00:38:16.584 norandommap=0 00:38:16.584 numjobs=1 00:38:16.584 00:38:16.584 verify_dump=1 00:38:16.584 verify_backlog=512 00:38:16.584 verify_state_save=0 00:38:16.584 do_verify=1 00:38:16.584 verify=crc32c-intel 00:38:16.584 [job0] 00:38:16.584 filename=/dev/nvme0n1 00:38:16.584 Could not set queue depth (nvme0n1) 00:38:16.844 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:16.844 fio-3.35 00:38:16.844 Starting 1 thread 00:38:18.226 00:38:18.226 job0: (groupid=0, jobs=1): err= 0: pid=978104: Mon Sep 30 23:06:44 2024 00:38:18.226 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:18.226 slat (nsec): min=7086, max=62606, avg=27845.76, stdev=2025.93 00:38:18.226 clat (usec): min=649, max=1058, avg=948.50, stdev=48.81 00:38:18.226 lat (usec): min=656, max=1086, avg=976.35, stdev=49.04 00:38:18.226 clat percentiles (usec): 00:38:18.226 | 1.00th=[ 758], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 930], 00:38:18.226 | 30.00th=[ 938], 40.00th=[ 947], 50.00th=[ 955], 60.00th=[ 963], 00:38:18.226 | 70.00th=[ 971], 80.00th=[ 979], 90.00th=[ 996], 95.00th=[ 1004], 00:38:18.226 | 99.00th=[ 1029], 99.50th=[ 1037], 99.90th=[ 1057], 99.95th=[ 1057], 00:38:18.226 | 99.99th=[ 1057] 00:38:18.226 write: IOPS=812, BW=3249KiB/s (3327kB/s)(3252KiB/1001msec); 0 zone resets 00:38:18.226 slat (usec): min=9, max=30486, avg=67.84, stdev=1068.20 00:38:18.226 clat (usec): min=122, max=801, avg=535.08, stdev=100.92 00:38:18.226 lat (usec): min=136, max=31152, avg=602.93, stdev=1078.08 00:38:18.226 clat percentiles (usec): 00:38:18.226 | 1.00th=[ 237], 5.00th=[ 347], 10.00th=[ 404], 20.00th=[ 453], 00:38:18.226 | 30.00th=[ 510], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 553], 00:38:18.226 | 70.00th=[ 594], 80.00th=[ 627], 90.00th=[ 660], 95.00th=[ 685], 00:38:18.226 | 99.00th=[ 725], 99.50th=[ 742], 99.90th=[ 799], 99.95th=[ 799], 00:38:18.226 | 99.99th=[ 799] 00:38:18.226 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:38:18.226 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:18.226 lat (usec) : 250=0.83%, 500=15.92%, 750=44.68%, 1000=35.77% 00:38:18.226 lat (msec) : 2=2.79% 00:38:18.226 cpu : usr=4.00%, sys=3.90%, ctx=1328, majf=0, minf=1 00:38:18.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.226 issued rwts: total=512,813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:18.226 00:38:18.226 Run status group 0 (all jobs): 00:38:18.226 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:38:18.226 WRITE: bw=3249KiB/s (3327kB/s), 3249KiB/s-3249KiB/s (3327kB/s-3327kB/s), io=3252KiB (3330kB), run=1001-1001msec 00:38:18.226 00:38:18.226 Disk stats (read/write): 00:38:18.226 nvme0n1: ios=537/639, merge=0/0, ticks=1453/270, in_queue=1723, util=98.80% 00:38:18.226 23:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:18.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:18.226 23:06:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:18.226 rmmod nvme_tcp 00:38:18.226 rmmod nvme_fabrics 00:38:18.226 rmmod nvme_keyring 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 977012 ']' 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 977012 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 977012 ']' 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 977012 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:18.226 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 977012 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 977012' 00:38:18.487 killing process with pid 977012 00:38:18.487 23:06:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 977012 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 977012 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:18.487 23:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:21.032 00:38:21.032 real 0m15.711s 00:38:21.032 user 0m37.808s 00:38:21.032 sys 0m7.450s 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:21.032 ************************************ 00:38:21.032 END TEST nvmf_nmic 00:38:21.032 ************************************ 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:21.032 ************************************ 00:38:21.032 START TEST nvmf_fio_target 00:38:21.032 ************************************ 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:21.032 * Looking for test storage... 
00:38:21.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:21.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.032 --rc genhtml_branch_coverage=1 00:38:21.032 --rc genhtml_function_coverage=1 00:38:21.032 --rc genhtml_legend=1 00:38:21.032 --rc geninfo_all_blocks=1 00:38:21.032 --rc geninfo_unexecuted_blocks=1 00:38:21.032 00:38:21.032 ' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:21.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.032 --rc genhtml_branch_coverage=1 00:38:21.032 --rc genhtml_function_coverage=1 00:38:21.032 --rc genhtml_legend=1 00:38:21.032 --rc geninfo_all_blocks=1 00:38:21.032 --rc geninfo_unexecuted_blocks=1 00:38:21.032 00:38:21.032 ' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:21.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.032 --rc genhtml_branch_coverage=1 00:38:21.032 --rc genhtml_function_coverage=1 00:38:21.032 --rc genhtml_legend=1 00:38:21.032 --rc geninfo_all_blocks=1 00:38:21.032 --rc geninfo_unexecuted_blocks=1 00:38:21.032 00:38:21.032 ' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:21.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.032 --rc genhtml_branch_coverage=1 00:38:21.032 --rc genhtml_function_coverage=1 00:38:21.032 --rc genhtml_legend=1 00:38:21.032 --rc geninfo_all_blocks=1 00:38:21.032 --rc geninfo_unexecuted_blocks=1 00:38:21.032 
00:38:21.032 ' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:21.032 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:21.033 23:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:29.177 23:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:29.177 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # 
for pci in "${pci_devs[@]}" 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:29.178 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:29.178 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:29.178 Found net devices under 0000:31:00.0: cvl_0_0 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:29.178 23:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:29.178 Found net devices under 0000:31:00.1: cvl_0_1 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:29.178 23:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:29.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:29.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:38:29.178 00:38:29.178 --- 10.0.0.2 ping statistics --- 00:38:29.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.178 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:29.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:29.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:38:29.178 00:38:29.178 --- 10.0.0.1 ping statistics --- 00:38:29.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.178 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=982511 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 982511 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 982511 ']' 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:29.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:29.178 23:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:29.178 [2024-09-30 23:06:55.507273] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:29.178 [2024-09-30 23:06:55.508429] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:38:29.179 [2024-09-30 23:06:55.508480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:29.179 [2024-09-30 23:06:55.600485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:29.179 [2024-09-30 23:06:55.695920] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:29.179 [2024-09-30 23:06:55.695987] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:29.179 [2024-09-30 23:06:55.695996] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:29.179 [2024-09-30 23:06:55.696003] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:29.179 [2024-09-30 23:06:55.696010] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:29.179 [2024-09-30 23:06:55.696203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.179 [2024-09-30 23:06:55.696370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:29.179 [2024-09-30 23:06:55.696531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.179 [2024-09-30 23:06:55.696532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:29.179 [2024-09-30 23:06:55.783476] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:29.179 [2024-09-30 23:06:55.784002] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:29.179 [2024-09-30 23:06:55.784522] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:29.179 [2024-09-30 23:06:55.785100] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:29.179 [2024-09-30 23:06:55.785110] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:29.439 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:29.439 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:38:29.439 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:29.439 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:29.439 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:29.439 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:29.439 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:29.700 [2024-09-30 23:06:56.549709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.700 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:29.960 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:38:29.960 23:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:30.222 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:38:30.222 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:30.222 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:38:30.222 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:30.483 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:38:30.483 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:38:30.744 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:31.006 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:38:31.006 23:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:31.267 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:38:31.267 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:31.267 23:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:38:31.267 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:38:31.528 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:31.790 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:31.790 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:31.790 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:31.790 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:32.052 23:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:32.312 [2024-09-30 23:06:59.153704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.312 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:38:32.573 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:38:32.573 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:33.144 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:38:33.144 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:38:33.144 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:38:33.144 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:38:33.144 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:38:33.144 23:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:38:35.058 23:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:38:35.058 23:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:38:35.058 23:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:38:35.058 23:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:38:35.058 23:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:38:35.058 23:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:38:35.058 23:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:35.058 [global] 00:38:35.058 thread=1 00:38:35.058 invalidate=1 00:38:35.058 rw=write 00:38:35.058 time_based=1 00:38:35.058 runtime=1 00:38:35.058 ioengine=libaio 00:38:35.058 direct=1 00:38:35.058 bs=4096 00:38:35.058 iodepth=1 00:38:35.058 norandommap=0 00:38:35.058 numjobs=1 00:38:35.058 00:38:35.058 verify_dump=1 00:38:35.058 verify_backlog=512 00:38:35.058 verify_state_save=0 00:38:35.058 do_verify=1 00:38:35.058 verify=crc32c-intel 00:38:35.058 [job0] 00:38:35.058 filename=/dev/nvme0n1 00:38:35.058 [job1] 00:38:35.058 filename=/dev/nvme0n2 00:38:35.058 [job2] 00:38:35.058 filename=/dev/nvme0n3 00:38:35.058 [job3] 00:38:35.058 filename=/dev/nvme0n4 00:38:35.343 Could not set queue depth (nvme0n1) 00:38:35.343 Could not set queue depth (nvme0n2) 00:38:35.343 Could not set queue depth (nvme0n3) 00:38:35.343 Could not set queue depth (nvme0n4) 00:38:35.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:35.606 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:35.606 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:35.606 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:35.606 fio-3.35 00:38:35.606 Starting 4 threads 00:38:37.009 00:38:37.009 job0: (groupid=0, jobs=1): err= 0: pid=984086: Mon Sep 30 23:07:03 2024 00:38:37.009 read: IOPS=18, BW=75.6KiB/s (77.4kB/s)(76.0KiB/1005msec) 00:38:37.009 slat (nsec): min=26718, max=27302, avg=26941.26, stdev=169.25 00:38:37.009 clat (usec): min=40863, max=41072, avg=40959.43, stdev=53.28 00:38:37.009 lat (usec): min=40890, max=41099, avg=40986.37, stdev=53.35 00:38:37.009 clat percentiles (usec): 00:38:37.009 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:38:37.009 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:37.009 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:37.009 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:37.009 | 99.99th=[41157] 00:38:37.009 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:38:37.009 slat (usec): min=9, max=859, avg=30.83, stdev=38.67 00:38:37.009 clat (usec): min=138, max=764, avg=401.70, stdev=114.76 00:38:37.009 lat (usec): min=148, max=1316, avg=432.53, stdev=126.52 00:38:37.009 clat percentiles (usec): 00:38:37.009 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 253], 20.00th=[ 310], 00:38:37.009 | 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 375], 60.00th=[ 429], 00:38:37.009 | 70.00th=[ 461], 80.00th=[ 502], 90.00th=[ 570], 95.00th=[ 611], 00:38:37.009 | 
99.00th=[ 676], 99.50th=[ 709], 99.90th=[ 766], 99.95th=[ 766], 00:38:37.009 | 99.99th=[ 766] 00:38:37.009 bw ( KiB/s): min= 4087, max= 4087, per=47.56%, avg=4087.00, stdev= 0.00, samples=1 00:38:37.009 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:38:37.009 lat (usec) : 250=9.23%, 500=67.80%, 750=19.21%, 1000=0.19% 00:38:37.009 lat (msec) : 50=3.58% 00:38:37.009 cpu : usr=0.80%, sys=1.39%, ctx=534, majf=0, minf=1 00:38:37.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.009 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:37.009 job1: (groupid=0, jobs=1): err= 0: pid=984093: Mon Sep 30 23:07:03 2024 00:38:37.009 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:37.009 slat (nsec): min=7284, max=61000, avg=26450.31, stdev=3447.50 00:38:37.009 clat (usec): min=733, max=1175, avg=991.83, stdev=75.41 00:38:37.009 lat (usec): min=759, max=1201, avg=1018.28, stdev=75.62 00:38:37.009 clat percentiles (usec): 00:38:37.009 | 1.00th=[ 791], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 947], 00:38:37.009 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:38:37.009 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:38:37.009 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1172], 99.95th=[ 1172], 00:38:37.009 | 99.99th=[ 1172] 00:38:37.009 write: IOPS=693, BW=2773KiB/s (2840kB/s)(2776KiB/1001msec); 0 zone resets 00:38:37.009 slat (nsec): min=9963, max=67165, avg=31574.79, stdev=9237.09 00:38:37.009 clat (usec): min=251, max=918, avg=639.17, stdev=103.73 00:38:37.009 lat (usec): min=263, max=952, avg=670.75, stdev=107.72 00:38:37.009 clat percentiles (usec): 00:38:37.009 | 1.00th=[ 379], 5.00th=[ 445], 10.00th=[ 490], 20.00th=[ 562], 00:38:37.009 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:38:37.009 | 70.00th=[ 709], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:38:37.009 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 922], 99.95th=[ 922], 00:38:37.009 | 99.99th=[ 922] 00:38:37.009 bw ( KiB/s): min= 4096, max= 4096, per=47.66%, avg=4096.00, stdev= 0.00, samples=1 00:38:37.009 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:37.009 lat (usec) : 500=6.63%, 750=44.44%, 1000=28.28% 00:38:37.009 lat (msec) : 2=20.65% 00:38:37.009 cpu : usr=1.40%, sys=4.10%, ctx=1208, majf=0, minf=1 00:38:37.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.009 issued rwts: total=512,694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:37.009 job2: (groupid=0, jobs=1): err= 0: pid=984094: Mon Sep 30 23:07:03 2024 00:38:37.009 read: IOPS=55, BW=224KiB/s (229kB/s)(232KiB/1038msec) 00:38:37.009 slat (nsec): min=7544, max=33961, avg=23635.57, stdev=7498.63 00:38:37.009 clat (usec): min=453, max=42110, avg=12874.75, stdev=18870.11 00:38:37.009 lat (usec): min=481, max=42137, avg=12898.39, stdev=18872.33 00:38:37.009 clat percentiles (usec): 00:38:37.009 | 1.00th=[ 453], 5.00th=[ 498], 10.00th=[ 603], 20.00th=[ 734], 00:38:37.009 | 30.00th=[ 816], 
40.00th=[ 914], 50.00th=[ 938], 60.00th=[ 979],
00:38:37.009 | 70.00th=[ 1254], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:38:37.009 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:38:37.009 | 99.99th=[42206]
00:38:37.009 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets
00:38:37.009 slat (usec): min=6, max=31120, avg=85.13, stdev=1377.06
00:38:37.010 clat (usec): min=115, max=909, avg=467.97, stdev=180.11
00:38:37.010 lat (usec): min=126, max=31952, avg=553.10, stdev=1405.40
00:38:37.010 clat percentiles (usec):
00:38:37.010 | 1.00th=[ 118], 5.00th=[ 131], 10.00th=[ 253], 20.00th=[ 293],
00:38:37.010 | 30.00th=[ 343], 40.00th=[ 404], 50.00th=[ 482], 60.00th=[ 529],
00:38:37.010 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 709], 95.00th=[ 750],
00:38:37.010 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 914], 99.95th=[ 914],
00:38:37.010 | 99.99th=[ 914]
00:38:37.010 bw ( KiB/s): min= 4096, max= 4096, per=47.66%, avg=4096.00, stdev= 0.00, samples=1
00:38:37.010 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:38:37.010 lat (usec) : 250=8.60%, 500=40.88%, 750=38.07%, 1000=9.12%
00:38:37.010 lat (msec) : 2=0.35%, 50=2.98%
00:38:37.010 cpu : usr=0.10%, sys=1.45%, ctx=574, majf=0, minf=1
00:38:37.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:37.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:37.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:37.010 issued rwts: total=58,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:37.010 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:37.010 job3: (groupid=0, jobs=1): err= 0: pid=984095: Mon Sep 30 23:07:03 2024
00:38:37.010 read: IOPS=192, BW=771KiB/s (790kB/s)(772KiB/1001msec)
00:38:37.010 slat (nsec): min=7636, max=53169, avg=26078.32, stdev=6820.37
00:38:37.010 clat (usec): min=451, max=42621, avg=3781.28, stdev=10710.67
00:38:37.010 lat (usec): min=478, max=42648, avg=3807.36, stdev=10710.96
00:38:37.010 clat percentiles (usec):
00:38:37.010 | 1.00th=[ 453], 5.00th=[ 498], 10.00th=[ 553], 20.00th=[ 635],
00:38:37.010 | 30.00th=[ 701], 40.00th=[ 791], 50.00th=[ 873], 60.00th=[ 898],
00:38:37.010 | 70.00th=[ 930], 80.00th=[ 971], 90.00th=[ 1012], 95.00th=[41681],
00:38:37.010 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:38:37.010 | 99.99th=[42730]
00:38:37.010 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:38:37.010 slat (nsec): min=3893, max=57074, avg=27629.54, stdev=11615.15
00:38:37.010 clat (usec): min=126, max=957, avg=474.58, stdev=154.50
00:38:37.010 lat (usec): min=140, max=994, avg=502.21, stdev=159.37
00:38:37.010 clat percentiles (usec):
00:38:37.010 | 1.00th=[ 198], 5.00th=[ 255], 10.00th=[ 285], 20.00th=[ 318],
00:38:37.010 | 30.00th=[ 371], 40.00th=[ 420], 50.00th=[ 474], 60.00th=[ 515],
00:38:37.010 | 70.00th=[ 553], 80.00th=[ 611], 90.00th=[ 676], 95.00th=[ 750],
00:38:37.010 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 955], 99.95th=[ 955],
00:38:37.010 | 99.99th=[ 955]
00:38:37.010 bw ( KiB/s): min= 4096, max= 4096, per=47.66%, avg=4096.00, stdev= 0.00, samples=1
00:38:37.010 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:38:37.010 lat (usec) : 250=2.84%, 500=39.57%, 750=36.17%, 1000=17.87%
00:38:37.010 lat (msec) : 2=1.56%, 50=1.99%
00:38:37.010 cpu : usr=1.40%, sys=1.50%, ctx=706, majf=0, minf=1
00:38:37.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:37.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:37.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:37.010 issued rwts: total=193,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:37.010 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:37.010
00:38:37.010 Run status group 0 (all jobs):
00:38:37.010 READ: bw=3013KiB/s (3086kB/s), 75.6KiB/s-2046KiB/s (77.4kB/s-2095kB/s), io=3128KiB (3203kB), run=1001-1038msec
00:38:37.010 WRITE: bw=8593KiB/s (8800kB/s), 1973KiB/s-2773KiB/s (2020kB/s-2840kB/s), io=8920KiB (9134kB), run=1001-1038msec
00:38:37.010
00:38:37.010 Disk stats (read/write):
00:38:37.010 nvme0n1: ios=63/512, merge=0/0, ticks=680/196, in_queue=876, util=83.77%
00:38:37.010 nvme0n2: ios=496/512, merge=0/0, ticks=1328/327, in_queue=1655, util=87.44%
00:38:37.010 nvme0n3: ios=112/512, merge=0/0, ticks=1109/226, in_queue=1335, util=94.29%
00:38:37.010 nvme0n4: ios=184/512, merge=0/0, ticks=681/224, in_queue=905, util=97.54%
00:38:37.010 23:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:38:37.010 [global]
00:38:37.010 thread=1
00:38:37.010 invalidate=1
00:38:37.010 rw=randwrite
00:38:37.010 time_based=1
00:38:37.010 runtime=1
00:38:37.010 ioengine=libaio
00:38:37.010 direct=1
00:38:37.010 bs=4096
00:38:37.010 iodepth=1
00:38:37.010 norandommap=0
00:38:37.010 numjobs=1
00:38:37.010
00:38:37.010 verify_dump=1
00:38:37.010 verify_backlog=512
00:38:37.010 verify_state_save=0
00:38:37.010 do_verify=1
00:38:37.010 verify=crc32c-intel
00:38:37.010 [job0]
00:38:37.010 filename=/dev/nvme0n1
00:38:37.010 [job1]
00:38:37.010 filename=/dev/nvme0n2
00:38:37.010 [job2]
00:38:37.010 filename=/dev/nvme0n3
00:38:37.010 [job3]
00:38:37.010 filename=/dev/nvme0n4
00:38:37.010 Could not set queue depth (nvme0n1)
00:38:37.010 Could not set queue depth (nvme0n2)
00:38:37.010 Could not set queue depth (nvme0n3)
00:38:37.010 Could not set queue depth (nvme0n4)
00:38:37.271 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:37.271 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:37.271 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:37.271 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:37.271 fio-3.35
00:38:37.271 Starting 4 threads
00:38:38.673
00:38:38.673 job0: (groupid=0, jobs=1): err= 0: pid=984609: Mon Sep 30 23:07:05 2024
00:38:38.673 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1013msec)
00:38:38.673 slat (nsec): min=24204, max=25334, avg=24887.29, stdev=252.09
00:38:38.673 clat (usec): min=1158, max=42106, avg=39544.70, stdev=9892.34
00:38:38.673 lat (usec): min=1183, max=42130, avg=39569.59, stdev=9892.35
00:38:38.673 clat percentiles (usec):
00:38:38.673 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41681], 20.00th=[41681],
00:38:38.673 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:38:38.673 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:38:38.673 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:38:38.673 | 99.99th=[42206]
00:38:38.673 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets
00:38:38.673 slat (nsec): min=9232, max=49820, avg=27034.73, stdev=9337.68
00:38:38.673 clat (usec): min=273, max=976, avg=629.63, stdev=128.26
00:38:38.673 lat (usec): min=284, max=1007, avg=656.67, stdev=133.14
00:38:38.673 clat percentiles (usec):
00:38:38.673 | 1.00th=[ 326], 5.00th=[ 383], 10.00th=[ 453], 20.00th=[ 529],
00:38:38.673 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 676],
00:38:38.673 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 816],
00:38:38.673 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 979], 99.95th=[ 979],
00:38:38.673 | 99.99th=[ 979]
00:38:38.673 bw ( KiB/s): min= 4096, max= 4096, per=40.96%, avg=4096.00, stdev= 0.00, samples=1
00:38:38.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:38:38.673 lat (usec) : 500=15.69%, 750=65.03%, 1000=16.07%
00:38:38.673 lat (msec) : 2=0.19%, 50=3.02%
00:38:38.673 cpu : usr=0.69%, sys=1.38%, ctx=529, majf=0, minf=2
00:38:38.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:38.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.673 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:38.673 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:38.673 job1: (groupid=0, jobs=1): err= 0: pid=984610: Mon Sep 30 23:07:05 2024
00:38:38.673 read: IOPS=207, BW=829KiB/s (849kB/s)(844KiB/1018msec)
00:38:38.673 slat (nsec): min=7305, max=59023, avg=25413.78, stdev=5732.89
00:38:38.673 clat (usec): min=211, max=42011, avg=3489.60, stdev=10135.20
00:38:38.673 lat (usec): min=237, max=42038, avg=3515.01, stdev=10136.09
00:38:38.673 clat percentiles (usec):
00:38:38.673 | 1.00th=[ 445], 5.00th=[ 537], 10.00th=[ 603], 20.00th=[ 685],
00:38:38.673 | 30.00th=[ 742], 40.00th=[ 783], 50.00th=[ 832], 60.00th=[ 857],
00:38:38.673 | 70.00th=[ 889], 80.00th=[ 922], 90.00th=[ 963], 95.00th=[41157],
00:38:38.673 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:38:38.673 | 99.99th=[42206]
00:38:38.673 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:38:38.673 slat (usec): min=9, max=37100, avg=101.66, stdev=1638.36
00:38:38.673 clat (usec): min=119, max=783, avg=426.05, stdev=118.79
00:38:38.673 lat (usec): min=131, max=37547, avg=527.70, stdev=1643.81
00:38:38.673 clat percentiles (usec):
00:38:38.673 | 1.00th=[ 206], 5.00th=[ 231], 10.00th=[ 293], 20.00th=[ 322],
00:38:38.673 | 30.00th=[ 347], 40.00th=[ 375], 50.00th=[ 424], 60.00th=[ 457],
00:38:38.673 | 70.00th=[ 486], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 627],
00:38:38.673 | 99.00th=[ 685], 99.50th=[ 750], 99.90th=[ 783], 99.95th=[ 783],
00:38:38.673 | 99.99th=[ 783]
00:38:38.673 bw ( KiB/s): min= 4096, max= 4096, per=40.96%, avg=4096.00, stdev= 0.00, samples=1
00:38:38.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:38:38.673 lat (usec) : 250=4.29%, 500=48.82%, 750=26.56%, 1000=17.98%
00:38:38.673 lat (msec) : 2=0.28%, 4=0.14%, 50=1.94%
00:38:38.673 cpu : usr=0.88%, sys=2.16%, ctx=725, majf=0, minf=1
00:38:38.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:38.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.673 issued rwts: total=211,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:38.673 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:38.673 job2: (groupid=0, jobs=1): err= 0: pid=984611: Mon Sep 30 23:07:05 2024
00:38:38.673 read: IOPS=712, BW=2849KiB/s (2918kB/s)(2852KiB/1001msec)
00:38:38.673 slat (nsec): min=6517, max=58181, avg=25521.55, stdev=8581.83
00:38:38.673 clat (usec): min=351, max=949, avg=709.84, stdev=104.84
00:38:38.673 lat (usec): min=359, max=978, avg=735.36, stdev=107.81
00:38:38.673 clat percentiles (usec):
00:38:38.673 | 1.00th=[ 457], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 619],
00:38:38.673 | 30.00th=[ 652], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 742],
00:38:38.673 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 832], 95.00th=[ 865],
00:38:38.673 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 947], 99.95th=[ 947],
00:38:38.673 | 99.99th=[ 947]
00:38:38.673 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:38:38.673 slat (nsec): min=7800, max=69469, avg=31098.15, stdev=10585.31
00:38:38.673 clat (usec): min=123, max=834, avg=421.22, stdev=129.32
00:38:38.673 lat (usec): min=131, max=871, avg=452.32, stdev=133.05
00:38:38.673 clat percentiles (usec):
00:38:38.673 | 1.00th=[ 174], 5.00th=[ 223], 10.00th=[ 269], 20.00th=[ 302],
00:38:38.673 | 30.00th=[ 330], 40.00th=[ 383], 50.00th=[ 412], 60.00th=[ 445],
00:38:38.673 | 70.00th=[ 486], 80.00th=[ 537], 90.00th=[ 603], 95.00th=[ 644],
00:38:38.673 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 799], 99.95th=[ 832],
00:38:38.673 | 99.99th=[ 832]
00:38:38.673 bw ( KiB/s): min= 4096, max= 4096, per=40.96%, avg=4096.00, stdev= 0.00, samples=1
00:38:38.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:38:38.673 lat (usec) : 250=4.84%, 500=39.55%, 750=39.84%, 1000=15.77%
00:38:38.673 cpu : usr=3.80%, sys=6.30%, ctx=1740, majf=0, minf=1
00:38:38.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:38.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.673 issued rwts: total=713,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:38.673 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:38.673 job3: (groupid=0, jobs=1): err= 0: pid=984613: Mon Sep 30 23:07:05 2024
00:38:38.673 read: IOPS=16, BW=66.4KiB/s (68.0kB/s)(68.0KiB/1024msec)
00:38:38.673 slat (nsec): min=14935, max=31123, avg=26618.65, stdev=3911.17
00:38:38.673 clat (usec): min=1061, max=42029, avg=39305.83, stdev=9862.72
00:38:38.673 lat (usec): min=1088, max=42060, avg=39332.44, stdev=9862.87
00:38:38.673 clat percentiles (usec):
00:38:38.673 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 20.00th=[41157],
00:38:38.673 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681],
00:38:38.673 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:38:38.673 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:38:38.673 | 99.99th=[42206]
00:38:38.673 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets
00:38:38.673 slat (nsec): min=9938, max=63009, avg=32150.88, stdev=7245.47
00:38:38.673 clat (usec): min=184, max=1128, avg=652.50, stdev=160.97
00:38:38.673 lat (usec): min=197, max=1162, avg=684.65, stdev=162.37
00:38:38.674 clat percentiles (usec):
00:38:38.674 | 1.00th=[ 269], 5.00th=[ 404], 10.00th=[ 445], 20.00th=[ 519],
00:38:38.674 | 30.00th=[ 570], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685],
00:38:38.674 | 70.00th=[ 725], 80.00th=[ 791], 90.00th=[ 865], 95.00th=[ 930],
00:38:38.674 | 99.00th=[ 1037], 99.50th=[ 1074], 99.90th=[ 1123], 99.95th=[ 1123],
00:38:38.674 | 99.99th=[ 1123]
00:38:38.674 bw ( KiB/s): min= 4096, max= 4096, per=40.96%, avg=4096.00, stdev= 0.00, samples=1
00:38:38.674 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:38:38.674 lat (usec) : 250=0.95%, 500=14.93%, 750=56.14%, 1000=22.68%
00:38:38.674 lat (msec) : 2=2.27%, 50=3.02%
00:38:38.674 cpu : usr=0.88%, sys=1.56%, ctx=530, majf=0, minf=1
00:38:38.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:38.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.674 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:38.674 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:38.674
00:38:38.674 Run status group 0 (all jobs):
00:38:38.674 READ: bw=3742KiB/s (3832kB/s), 66.4KiB/s-2849KiB/s (68.0kB/s-2918kB/s), io=3832KiB (3924kB), run=1001-1024msec
00:38:38.674 WRITE: bw=9.77MiB/s (10.2MB/s), 2000KiB/s-4092KiB/s (2048kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1024msec
00:38:38.674
00:38:38.674 Disk stats (read/write):
00:38:38.674 nvme0n1: ios=62/512, merge=0/0, ticks=508/313, in_queue=821, util=86.67%
00:38:38.674 nvme0n2: ios=256/512, merge=0/0, ticks=1256/202, in_queue=1458, util=97.15%
00:38:38.674 nvme0n3: ios=551/1012, merge=0/0, ticks=1191/322, in_queue=1513, util=96.41%
00:38:38.674 nvme0n4: ios=45/512, merge=0/0, ticks=1522/321, in_queue=1843, util=97.22%
00:38:38.674 23:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:38:38.674 [global]
00:38:38.674 thread=1
00:38:38.674 invalidate=1
00:38:38.674 rw=write
00:38:38.674 time_based=1
00:38:38.674 runtime=1
00:38:38.674 ioengine=libaio
00:38:38.674 direct=1
00:38:38.674 bs=4096
00:38:38.674 iodepth=128
00:38:38.674 norandommap=0
00:38:38.674 numjobs=1
00:38:38.674
00:38:38.674 verify_dump=1
00:38:38.674 verify_backlog=512
00:38:38.674 verify_state_save=0
00:38:38.674 do_verify=1
00:38:38.674 verify=crc32c-intel
00:38:38.674 [job0]
00:38:38.674 filename=/dev/nvme0n1
00:38:38.674 [job1]
00:38:38.674 filename=/dev/nvme0n2
00:38:38.674 [job2]
00:38:38.674 filename=/dev/nvme0n3
00:38:38.674 [job3]
00:38:38.674 filename=/dev/nvme0n4
00:38:38.674 Could not set queue depth (nvme0n1)
00:38:38.674 Could not set queue depth (nvme0n2)
00:38:38.674 Could not set queue depth (nvme0n3)
00:38:38.674 Could not set queue depth (nvme0n4)
00:38:38.936 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:38:38.936 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:38:38.936 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:38:38.936 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:38:38.936 fio-3.35
00:38:38.936 Starting 4 threads
00:38:40.357
00:38:40.357 job0: (groupid=0, jobs=1): err= 0: pid=985136: Mon Sep 30 23:07:07 2024
00:38:40.357 read: IOPS=5548, BW=21.7MiB/s (22.7MB/s)(22.0MiB/1015msec)
00:38:40.357 slat (nsec): min=887, max=11535k, avg=76860.83, stdev=620922.98
00:38:40.357 clat (usec): min=3767, max=35265, avg=11199.30, stdev=5747.25
00:38:40.357 lat (usec): min=3776, max=35272, avg=11276.16, stdev=5789.93
00:38:40.357 clat percentiles (usec):
00:38:40.357 | 1.00th=[ 5014], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6915],
00:38:40.357 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 9110], 60.00th=[10028],
00:38:40.357 | 70.00th=[12125], 80.00th=[14222], 90.00th=[20841], 95.00th=[23462],
00:38:40.357 | 99.00th=[29754], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390],
00:38:40.357 | 99.99th=[35390]
00:38:40.357 write: IOPS=5918, BW=23.1MiB/s (24.2MB/s)(23.5MiB/1015msec); 0 zone resets
00:38:40.357 slat (nsec): min=1604, max=16508k, avg=72883.20, stdev=688063.19
00:38:40.357 clat (usec): min=404, max=40378, avg=10942.21, stdev=6423.18
00:38:40.357 lat (usec): min=417, max=40389, avg=11015.09, stdev=6470.10
00:38:40.357 clat percentiles (usec):
00:38:40.357 | 1.00th=[ 1004], 5.00th=[ 3261], 10.00th=[ 4817], 20.00th=[ 6194],
00:38:40.357 | 30.00th=[ 6849], 40.00th=[ 7767], 50.00th=[ 8979], 60.00th=[10290],
00:38:40.357 | 70.00th=[13435], 80.00th=[16581], 90.00th=[19792], 95.00th=[22414],
00:38:40.357 | 99.00th=[27919], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109],
00:38:40.357 | 99.99th=[40633]
00:38:40.357 bw ( KiB/s): min=22456, max=24576, per=27.41%, avg=23516.00, stdev=1499.07, samples=2
00:38:40.357 iops : min= 5614, max= 6144, avg=5879.00, stdev=374.77, samples=2
00:38:40.357 lat (usec) : 500=0.06%, 750=0.22%, 1000=0.23%
00:38:40.357 lat (msec) : 2=1.07%, 4=1.63%, 10=55.56%, 20=31.28%, 50=9.94%
00:38:40.357 cpu : usr=4.04%, sys=5.82%, ctx=345, majf=0, minf=1
00:38:40.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:38:40.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:40.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:38:40.357 issued rwts: total=5632,6007,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:40.357 latency : target=0, window=0, percentile=100.00%, depth=128
00:38:40.357 job1: (groupid=0, jobs=1): err= 0: pid=985138: Mon Sep 30 23:07:07 2024
00:38:40.357 read: IOPS=2457, BW=9831KiB/s (10.1MB/s)(9900KiB/1007msec)
00:38:40.357 slat (nsec): min=977, max=21373k, avg=142559.59, stdev=1086877.61
00:38:40.357 clat (usec): min=3633, max=43531, avg=17503.26, stdev=8736.68
00:38:40.357 lat (usec): min=3638, max=43540, avg=17645.82, stdev=8811.27
00:38:40.357 clat percentiles (usec):
00:38:40.357 | 1.00th=[ 5735], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7767],
00:38:40.357 | 30.00th=[10683], 40.00th=[14091], 50.00th=[16450], 60.00th=[20579],
00:38:40.357 | 70.00th=[22676], 80.00th=[25560], 90.00th=[28967], 95.00th=[32900],
00:38:40.357 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779],
00:38:40.357 | 99.99th=[43779]
00:38:40.357 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets
00:38:40.357 slat (nsec): min=1614, max=15381k, avg=247905.15, stdev=1193510.92
00:38:40.357 clat (usec): min=1144, max=96610, avg=32990.75, stdev=22045.26
00:38:40.357 lat (usec): min=1155, max=96614, avg=33238.66, stdev=22181.84
00:38:40.357 clat percentiles (usec):
00:38:40.357 | 1.00th=[ 6128], 5.00th=[10683], 10.00th=[11731], 20.00th=[13435],
00:38:40.357 | 30.00th=[17695], 40.00th=[20579], 50.00th=[22938], 60.00th=[31851],
00:38:40.357 | 70.00th=[39060], 80.00th=[56361], 90.00th=[65274], 95.00th=[76022],
00:38:40.357 | 99.00th=[89654], 99.50th=[95945], 99.90th=[96994], 99.95th=[96994],
00:38:40.357 | 99.99th=[96994]
00:38:40.357 bw ( KiB/s): min= 8208, max=12272, per=11.94%, avg=10240.00, stdev=2873.68, samples=2
00:38:40.357 iops : min= 2052, max= 3068, avg=2560.00, stdev=718.42, samples=2
00:38:40.357 lat (msec) : 2=0.04%, 4=0.40%, 10=15.23%, 20=31.24%, 50=39.44%
00:38:40.357 lat (msec) : 100=13.64%
00:38:40.357 cpu : usr=3.08%, sys=1.99%, ctx=255, majf=0, minf=2
00:38:40.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7%
00:38:40.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:40.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:38:40.357 issued rwts: total=2475,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:40.357 latency : target=0, window=0, percentile=100.00%, depth=128
00:38:40.357 job2: (groupid=0, jobs=1): err= 0: pid=985140: Mon Sep 30 23:07:07 2024
00:38:40.357 read: IOPS=9703, BW=37.9MiB/s (39.7MB/s)(39.6MiB/1044msec)
00:38:40.357 slat (nsec): min=1357, max=7435.2k, avg=42022.00, stdev=336391.86
00:38:40.357 clat (usec): min=1094, max=52131, avg=7355.31, stdev=5883.68
00:38:40.357 lat (usec): min=1111, max=52136, avg=7397.33, stdev=5889.60
00:38:40.357 clat percentiles (usec):
00:38:40.357 | 1.00th=[ 2638], 5.00th=[ 3687], 10.00th=[ 4621], 20.00th=[ 5342],
00:38:40.357 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6718],
00:38:40.357 | 70.00th=[ 7177], 80.00th=[ 7963], 90.00th=[ 8979], 95.00th=[10945],
00:38:40.357 | 99.00th=[46924], 99.50th=[48497], 99.90th=[49021], 99.95th=[52167],
00:38:40.357 | 99.99th=[52167]
00:38:40.357 write: IOPS=9808, BW=38.3MiB/s (40.2MB/s)(40.0MiB/1044msec); 0 zone resets
00:38:40.357 slat (nsec): min=1634, max=6015.1k, avg=37507.86, stdev=302069.74
00:38:40.357 clat (usec): min=427, max=12167, avg=5673.45, stdev=1652.44
00:38:40.357 lat (usec): min=438, max=12194, avg=5710.96, stdev=1660.41
00:38:40.357 clat percentiles (usec):
00:38:40.357 | 1.00th=[ 1696], 5.00th=[ 3326], 10.00th=[ 3851], 20.00th=[ 4293],
00:38:40.357 | 30.00th=[ 4883], 40.00th=[ 5276], 50.00th=[ 5669], 60.00th=[ 5932],
00:38:40.357 | 70.00th=[ 6194], 80.00th=[ 6587], 90.00th=[ 7963], 95.00th=[ 8586],
00:38:40.357 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11994], 99.95th=[11994],
00:38:40.357 | 99.99th=[12125]
00:38:40.357 bw ( KiB/s): min=40960, max=40960, per=47.74%, avg=40960.00, stdev= 0.00, samples=2
00:38:40.358 iops : min=10240, max=10240, avg=10240.00, stdev= 0.00, samples=2
00:38:40.358 lat (usec) : 500=0.03%, 750=0.03%, 1000=0.06%
00:38:40.358 lat (msec) : 2=0.65%, 4=9.66%, 10=85.15%, 20=3.46%, 50=0.91%
00:38:40.358 lat (msec) : 100=0.05%
00:38:40.358 cpu : usr=7.96%, sys=11.70%, ctx=462, majf=0, minf=1
00:38:40.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:38:40.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:40.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:38:40.358 issued rwts: total=10130,10240,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:40.358 latency : target=0, window=0, percentile=100.00%, depth=128
00:38:40.358 job3: (groupid=0, jobs=1): err= 0: pid=985142: Mon Sep 30 23:07:07 2024
00:38:40.358 read: IOPS=3109, BW=12.1MiB/s (12.7MB/s)(12.3MiB/1013msec)
00:38:40.358 slat (nsec): min=921, max=23221k, avg=117805.40, stdev=993028.11
00:38:40.358 clat (usec): min=3987, max=48308, avg=15636.74, stdev=7929.27
00:38:40.358 lat (usec): min=3994, max=48313, avg=15754.55, stdev=8008.83
00:38:40.358 clat percentiles (usec):
00:38:40.358 | 1.00th=[ 5342], 5.00th=[ 6128], 10.00th=[ 6980], 20.00th=[ 8586],
00:38:40.358 | 30.00th=[ 9765], 40.00th=[11994], 50.00th=[13960], 60.00th=[16909],
00:38:40.358 | 70.00th=[19268], 80.00th=[21103], 90.00th=[24773], 95.00th=[31327],
00:38:40.358 | 99.00th=[41157], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497],
00:38:40.358 | 99.99th=[48497]
00:38:40.358 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets
00:38:40.358 slat (nsec): min=1685, max=18555k, avg=157998.15, stdev=1000590.46
00:38:40.358 clat (usec): min=903, max=68518, avg=22105.12, stdev=16264.18
00:38:40.358 lat (usec): min=913, max=68527, avg=22263.12, stdev=16390.04
00:38:40.358 clat percentiles (usec):
00:38:40.358 | 1.00th=[ 2671], 5.00th=[ 5932], 10.00th=[ 7242], 20.00th=[10028],
00:38:40.358 | 30.00th=[11731], 40.00th=[14091], 50.00th=[17171], 60.00th=[19268],
00:38:40.358 | 70.00th=[21365], 80.00th=[33424], 90.00th=[53216], 95.00th=[59507],
00:38:40.358 | 99.00th=[65274], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682],
00:38:40.358 | 99.99th=[68682]
00:38:40.358 bw ( KiB/s): min=11888, max=16384, per=16.48%, avg=14136.00, stdev=3179.15, samples=2
00:38:40.358 iops : min= 2972, max= 4096, avg=3534.00, stdev=794.79, samples=2
00:38:40.358 lat (usec) : 1000=0.04%
00:38:40.358 lat (msec) : 2=0.15%, 4=1.14%, 10=23.15%, 20=45.07%, 50=24.24%
00:38:40.358 lat (msec) : 100=6.21%
00:38:40.358 cpu : usr=3.36%, sys=3.46%, ctx=253, majf=0, minf=1
00:38:40.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:38:40.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:40.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:38:40.358 issued rwts: total=3150,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:40.358 latency : target=0, window=0, percentile=100.00%, depth=128
00:38:40.358
00:38:40.358 Run status group 0 (all jobs):
00:38:40.358 READ: bw=80.0MiB/s (83.9MB/s), 9831KiB/s-37.9MiB/s (10.1MB/s-39.7MB/s), io=83.5MiB (87.6MB), run=1007-1044msec
00:38:40.358 WRITE: bw=83.8MiB/s (87.8MB/s), 9.93MiB/s-38.3MiB/s (10.4MB/s-40.2MB/s), io=87.5MiB (91.7MB), run=1007-1044msec
00:38:40.358
00:38:40.358 Disk stats (read/write):
00:38:40.358 nvme0n1: ios=4028/4096, merge=0/0, ticks=48907/47905, in_queue=96812, util=98.70%
00:38:40.358 nvme0n2: ios=2088/2111, merge=0/0, ticks=37781/55333, in_queue=93114, util=91.53%
00:38:40.358 nvme0n3: ios=10129/10240, merge=0/0, ticks=64523/53687, in_queue=118210, util=89.73%
00:38:40.358 nvme0n4: ios=2915/3072, merge=0/0, ticks=42709/54125, in_queue=96834, util=90.78%
00:38:40.358 23:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:38:40.358 [global]
00:38:40.358 thread=1
00:38:40.358 invalidate=1
00:38:40.358 rw=randwrite
00:38:40.358 time_based=1
00:38:40.358 runtime=1
00:38:40.358 ioengine=libaio
00:38:40.358 direct=1
00:38:40.358 bs=4096
00:38:40.358 iodepth=128
00:38:40.358 norandommap=0
00:38:40.358 numjobs=1
00:38:40.358
00:38:40.358 verify_dump=1
00:38:40.358 verify_backlog=512
00:38:40.358 verify_state_save=0
00:38:40.358 do_verify=1
00:38:40.358 verify=crc32c-intel
00:38:40.358 [job0]
00:38:40.358 filename=/dev/nvme0n1
00:38:40.358 [job1]
00:38:40.358 filename=/dev/nvme0n2
00:38:40.358 [job2]
00:38:40.358 filename=/dev/nvme0n3
00:38:40.358 [job3]
00:38:40.358 filename=/dev/nvme0n4
00:38:40.358 Could not set queue depth (nvme0n1)
00:38:40.358 Could not set queue depth (nvme0n2)
00:38:40.358 Could not set queue depth (nvme0n3)
00:38:40.358 Could not set queue depth (nvme0n4)
00:38:40.620 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:38:40.620 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:38:40.620 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:38:40.620 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:38:40.620 fio-3.35
00:38:40.620 Starting 4 threads
00:38:42.069
00:38:42.069 job0: (groupid=0, jobs=1): err= 0: pid=985653: Mon Sep 30 23:07:08 2024
00:38:42.069 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec)
00:38:42.069 slat (nsec): min=969, max=11246k, avg=110310.40, stdev=733625.93
00:38:42.069 clat (usec): min=1099, max=79291, avg=13372.91, stdev=10897.98
00:38:42.069 lat (usec): min=1126, max=79295, avg=13483.22, stdev=10991.38
00:38:42.069 clat percentiles (usec):
00:38:42.069 | 1.00th=[ 1598], 5.00th=[ 2704], 10.00th=[ 4293], 20.00th=[ 7242],
00:38:42.069 | 30.00th=[ 7898], 40.00th=[ 8356], 50.00th=[ 9765], 60.00th=[12780],
00:38:42.069 | 70.00th=[16319], 80.00th=[17957], 90.00th=[21365], 95.00th=[28181],
00:38:42.069 | 99.00th=[72877], 99.50th=[77071], 99.90th=[79168], 99.95th=[79168],
00:38:42.069 | 99.99th=[79168]
00:38:42.069 write: IOPS=2980, BW=11.6MiB/s (12.2MB/s)(11.8MiB/1011msec); 0 zone resets
00:38:42.069 slat (nsec): min=1664, max=18068k, avg=225563.32, stdev=1077985.50
00:38:42.069 clat (usec): min=387, max=94284, avg=31150.10, stdev=26655.40
00:38:42.069 lat (usec): min=421, max=94290, avg=31375.66, stdev=26836.19
00:38:42.069 clat percentiles (usec):
00:38:42.069 | 1.00th=[ 1795], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 7963],
00:38:42.069 | 30.00th=[11863], 40.00th=[16581], 50.00th=[20579], 60.00th=[28181],
00:38:42.069 | 70.00th=[37487], 80.00th=[53740], 90.00th=[80217], 95.00th=[85459],
00:38:42.069 | 99.00th=[91751], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848],
00:38:42.069 | 99.99th=[93848]
00:38:42.069 bw ( KiB/s): min= 9016, max=14064, per=13.61%, avg=11540.00, stdev=3569.48, samples=2
00:38:42.069 iops : min= 2254, max= 3516, avg=2885.00, stdev=892.37, samples=2
00:38:42.069 lat (usec) : 500=0.05%, 1000=0.09%
00:38:42.069 lat (msec) : 2=1.94%, 4=4.52%, 10=31.89%, 20=27.97%, 50=20.94%
00:38:42.069 lat (msec) : 100=12.60%
00:38:42.069 cpu : usr=2.28%, sys=2.87%, ctx=313, majf=0, minf=2
00:38:42.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:38:42.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:42.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:38:42.069 issued rwts: total=2560,3013,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:42.069 latency : target=0, window=0, percentile=100.00%, depth=128
00:38:42.069 job1: (groupid=0, jobs=1): err= 0: pid=985655: Mon Sep 30 23:07:08 2024
00:38:42.069 read: IOPS=7228, BW=28.2MiB/s (29.6MB/s)(28.7MiB/1015msec)
00:38:42.069 slat (nsec): min=929, max=12023k, avg=66537.44, stdev=520425.68
00:38:42.069 clat (usec): min=1579, max=43879, avg=9172.50, stdev=6630.62
00:38:42.069 lat (usec): min=1605, max=43887, avg=9239.04, stdev=6676.87
00:38:42.069 clat percentiles (usec):
00:38:42.069 | 1.00th=[ 2900], 5.00th=[ 4015], 10.00th=[ 4817], 20.00th=[ 5800],
00:38:42.069 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 6980], 60.00th=[ 7177],
00:38:42.069 | 70.00th=[ 7963], 80.00th=[ 9634], 90.00th=[19006], 95.00th=[26084],
00:38:42.069 | 99.00th=[34341], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060],
00:38:42.069 | 99.99th=[43779]
00:38:42.069 write: IOPS=7566, BW=29.6MiB/s (31.0MB/s)(30.0MiB/1015msec); 0 zone resets
00:38:42.069 slat (nsec): min=1567, max=13304k, avg=57753.41, stdev=455828.17
00:38:42.069 clat (usec): min=901, max=38321, avg=7974.13, stdev=4305.08
00:38:42.069 lat (usec): min=911, max=38352, avg=8031.88, stdev=4336.33
00:38:42.069 clat percentiles (usec):
00:38:42.069 | 1.00th=[ 2212], 5.00th=[ 3687], 10.00th=[ 4817], 20.00th=[ 5800],
00:38:42.069 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6915],
00:38:42.069 | 70.00th=[ 7242], 80.00th=[ 9110], 90.00th=[13042], 95.00th=[17433],
00:38:42.069 | 99.00th=[25035], 99.50th=[25560], 99.90th=[26084], 99.95th=[27132],
00:38:42.069 | 99.99th=[38536]
00:38:42.069 bw ( KiB/s): min=29336, max=32104, per=36.24%, avg=30720.00, stdev=1957.27, samples=2
00:38:42.069 iops : min= 7334, max= 8026, avg=7680.00, stdev=489.32, samples=2
00:38:42.069 lat (usec) : 1000=0.02%
00:38:42.069 lat (msec) : 2=0.52%, 4=5.46%, 10=75.99%, 20=11.93%, 50=6.08%
00:38:42.069 cpu : usr=4.34%, sys=6.51%, ctx=579, majf=0, minf=1
00:38:42.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:38:42.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:42.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:38:42.069 issued rwts: total=7337,7680,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:42.069 latency : target=0, window=0, percentile=100.00%, depth=128
00:38:42.069 job2: (groupid=0, jobs=1): err= 0: pid=985657: Mon Sep 30 23:07:08 2024
00:38:42.069 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec)
00:38:42.069 slat (nsec): min=926, max=18915k, avg=122298.37, stdev=841377.45
00:38:42.069 clat (usec): min=4733, max=48550, avg=15449.38, stdev=8842.29
00:38:42.069 lat (usec): min=4735, max=48557, avg=15571.68, stdev=8905.82
00:38:42.069 clat percentiles (usec):
00:38:42.069 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 8586],
00:38:42.069 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11207], 60.00th=[14091],
00:38:42.069 | 70.00th=[17695], 80.00th=[22938], 90.00th=[28967], 95.00th=[34866],
00:38:42.069 | 99.00th=[40633], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497],
00:38:42.069 | 99.99th=[48497]
00:38:42.069 write: IOPS=4122, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1009msec); 0 zone resets
00:38:42.069 slat (nsec): min=1560, max=16372k, avg=113153.78, stdev=892802.84
00:38:42.069 clat (usec): min=1234, max=46036, avg=15553.95, stdev=6927.03
00:38:42.069 lat (usec): min=1246, max=46043, avg=15667.10, stdev=7011.02
00:38:42.069 clat percentiles (usec):
00:38:42.069 | 1.00th=[ 4621], 5.00th=[ 5997], 10.00th=[ 7570], 20.00th=[ 8979],
00:38:42.069 | 30.00th=[10159], 40.00th=[12387], 50.00th=[14615], 60.00th=[17957],
00:38:42.069 | 70.00th=[19268], 80.00th=[21890], 90.00th=[25560], 95.00th=[26870],
00:38:42.069 | 99.00th=[30540], 99.50th=[38011], 99.90th=[45876], 99.95th=[45876],
00:38:42.069 | 99.99th=[45876]
00:38:42.069 bw ( KiB/s): min=10768, max=22000, per=19.33%, avg=16384.00, stdev=7942.22, samples=2
00:38:42.069 iops : min= 2692, max= 5500, avg=4096.00, stdev=1985.56, samples=2
00:38:42.069 lat (msec) : 2=0.02%, 4=0.10%, 10=27.74%, 20=46.91%, 50=25.23%
00:38:42.069 cpu : usr=3.08%, sys=4.27%, ctx=298, majf=0, minf=2
00:38:42.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:38:42.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:42.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:38:42.069 issued rwts: total=4096,4160,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:42.069 latency : target=0, window=0, percentile=100.00%, depth=128
00:38:42.069 job3: (groupid=0, jobs=1): err= 0: pid=985658: Mon Sep 30 23:07:08 2024
00:38:42.069 read: IOPS=6556, BW=25.6MiB/s (26.9MB/s)(25.7MiB/1005msec)
00:38:42.069 slat (nsec): min=928, max=10012k, avg=72110.00, stdev=571672.54
00:38:42.069 clat (usec): min=1894, max=29659, avg=10398.10, stdev=3514.12
00:38:42.069 lat (usec): min=1919, max=29661, avg=10470.21, stdev=3544.75
00:38:42.069 clat percentiles (usec):
00:38:42.069 | 1.00th=[ 3785], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 7767],
00:38:42.069 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552],
00:38:42.069 | 70.00th=[11600], 80.00th=[13173], 90.00th=[14877], 95.00th=[17171],
00:38:42.069 | 99.00th=[22152], 99.50th=[22414], 99.90th=[24249], 99.95th=[24249],
00:38:42.069 | 99.99th=[29754]
00:38:42.069 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets
00:38:42.069 slat (nsec): min=1558, max=10181k, avg=69605.05, stdev=553109.28
00:38:42.069 clat (usec): min=2012, max=22654, avg=8867.47, stdev=3172.91
00:38:42.070 lat (usec): min=2020, max=22676, avg=8937.07, stdev=3206.84
00:38:42.070 clat percentiles (usec):
00:38:42.070 | 1.00th=[ 3458], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 6194],
00:38:42.070 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 8225], 60.00th=[ 9241],
00:38:42.070 | 70.00th=[10421], 80.00th=[11731], 90.00th=[12780], 95.00th=[14091],
00:38:42.070 | 99.00th=[18220], 99.50th=[21890], 99.90th=[22152], 99.95th=[22152],
00:38:42.070 | 99.99th=[22676]
00:38:42.070 bw ( KiB/s): min=24584, max=28664, per=31.41%, avg=26624.00, stdev=2885.00, samples=2
00:38:42.070 iops : min= 6146, max= 7166, avg=6656.00, stdev=721.25, samples=2
00:38:42.070 lat (msec) : 2=0.14%, 4=1.43%, 10=57.67%, 20=39.68%, 50=1.08%
00:38:42.070 cpu : usr=3.98%, sys=7.77%, ctx=334, majf=0, minf=2
00:38:42.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:38:42.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:42.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:38:42.070 issued rwts: total=6589,6656,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:42.070 latency : target=0, window=0, percentile=100.00%, depth=128
00:38:42.070
00:38:42.070 Run status group 0 (all jobs):
00:38:42.070 READ: bw=79.2MiB/s (83.1MB/s), 9.89MiB/s-28.2MiB/s (10.4MB/s-29.6MB/s), io=80.4MiB (84.3MB), run=1005-1015msec
00:38:42.070 WRITE: bw=82.8MiB/s (86.8MB/s), 11.6MiB/s-29.6MiB/s (12.2MB/s-31.0MB/s), io=84.0MiB (88.1MB), run=1005-1015msec
00:38:42.070
00:38:42.070 Disk stats (read/write):
00:38:42.070 nvme0n1: ios=2593/2630, merge=0/0, ticks=30039/58396, in_queue=88435, util=97.60%
00:38:42.070 nvme0n2: ios=5824/6144, merge=0/0, ticks=38853/31854, in_queue=70707, util=96.53%
00:38:42.070 nvme0n3: ios=3584/3626, merge=0/0, ticks=24140/23696, in_queue=47836, util=87.54%
00:38:42.070 nvme0n4: ios=5273/5632, merge=0/0, ticks=47720/41741, in_queue=89461, util=92.19%
00:38:42.070 23:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:38:42.070 23:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=985746
00:38:42.070 23:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:38:42.070 23:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:38:42.070 [global]
00:38:42.070 thread=1
00:38:42.070 invalidate=1
00:38:42.070 rw=read
00:38:42.070 time_based=1
00:38:42.070 runtime=10
00:38:42.070 ioengine=libaio
00:38:42.070 direct=1
00:38:42.070 bs=4096
00:38:42.070 iodepth=1
00:38:42.070 norandommap=1
00:38:42.070 numjobs=1
00:38:42.070
00:38:42.070 [job0]
00:38:42.070 filename=/dev/nvme0n1
00:38:42.070 [job1]
00:38:42.070 filename=/dev/nvme0n2
00:38:42.070 [job2]
00:38:42.070 filename=/dev/nvme0n3
00:38:42.070 [job3]
00:38:42.070 filename=/dev/nvme0n4
00:38:42.070 Could not set queue depth (nvme0n1)
00:38:42.070 Could not set queue depth (nvme0n2)
00:38:42.070 Could not set queue depth (nvme0n3)
00:38:42.070 Could not set queue depth (nvme0n4)
00:38:42.332 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:42.332 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:42.332 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:42.332 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:42.332 fio-3.35
00:38:42.332 Starting 4 threads
00:38:44.876 23:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:38:45.137 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11837440, buflen=4096
00:38:45.137 fio: pid=986134, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:38:45.137 23:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:38:45.137 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=12775424, buflen=4096
00:38:45.137 fio: pid=986128, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:38:45.137 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:38:45.137 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:38:45.398 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4505600, buflen=4096
00:38:45.398 fio: pid=986097, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:38:45.398 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:38:45.398 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:38:45.659 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:38:45.659 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:38:45.659 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6918144, buflen=4096
00:38:45.659 fio: pid=986110, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:38:45.659
00:38:45.659 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=986097: Mon Sep 30 23:07:12 2024
00:38:45.659 read: IOPS=370, BW=1482KiB/s (1518kB/s)(4400KiB/2968msec)
00:38:45.660 slat (usec): min=6, max=15696, avg=39.01, stdev=472.33
00:38:45.660 clat (usec): min=284, max=42063, avg=2632.55, stdev=7781.31
00:38:45.660 lat (usec): min=291, max=57030, avg=2671.58, stdev=7866.19
00:38:45.660 clat percentiles (usec):
00:38:45.660 | 1.00th=[ 594], 5.00th=[ 807], 10.00th=[ 889], 20.00th=[ 971],
00:38:45.660 | 30.00th=[ 1029], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123],
00:38:45.660 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1237], 95.00th=[ 1401],
00:38:45.660 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:38:45.660 | 99.99th=[42206]
00:38:45.660 bw ( KiB/s): min= 96, max= 3552, per=15.01%, avg=1665.60, stdev=1611.73, samples=5
00:38:45.660 iops : min= 24, max= 888, avg=416.40, stdev=402.93, samples=5
00:38:45.660 lat (usec) : 500=0.45%, 750=2.36%, 1000=21.34%
00:38:45.660 lat (msec) : 2=71.84%, 50=3.91%
00:38:45.660 cpu : usr=0.44%, sys=1.01%, ctx=1103, majf=0, minf=1
00:38:45.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:45.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.660 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.660 issued rwts: total=1101,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:45.660 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:45.660 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=986110: Mon Sep 30 23:07:12 2024
00:38:45.660 read: IOPS=532, BW=2130KiB/s (2181kB/s)(6756KiB/3172msec)
00:38:45.660 slat (usec): min=6, max=12522, avg=28.21, stdev=304.24
00:38:45.660 clat (usec): min=245, max=42046, avg=1831.58, stdev=5955.34
00:38:45.660 lat (usec): min=252, max=42071, avg=1859.80, stdev=5961.86
00:38:45.660 clat percentiles (usec):
00:38:45.660 | 1.00th=[ 486], 5.00th=[ 668], 10.00th=[ 750], 20.00th=[ 832],
00:38:45.660 | 30.00th=[ 881], 40.00th=[ 930], 50.00th=[ 955], 60.00th=[ 988],
00:38:45.660 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1139],
00:38:45.660 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:38:45.660 | 99.99th=[42206]
00:38:45.660 bw ( KiB/s): min= 96, max= 4064, per=19.93%, avg=2211.67, stdev=1547.28, samples=6
00:38:45.660 iops : min= 24, max= 1016, avg=552.83, stdev=386.87, samples=6
00:38:45.660 lat (usec) : 250=0.06%, 500=1.01%, 750=9.17%, 1000=56.33%
00:38:45.660 lat (msec) : 2=31.12%, 50=2.25%
00:38:45.660 cpu : usr=0.38%, sys=1.39%, ctx=1694, majf=0, minf=2
00:38:45.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:45.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.660 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.660 issued rwts: total=1690,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:45.660 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:45.660 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=986128: Mon Sep 30 23:07:12 2024
00:38:45.660 read: IOPS=1119, BW=4475KiB/s (4582kB/s)(12.2MiB/2788msec)
00:38:45.660 slat (nsec): min=7135, max=61343, avg=27057.40, stdev=4592.79
00:38:45.660 clat (usec): min=218, max=4391, avg=852.97, stdev=205.37
00:38:45.660 lat (usec): min=225, max=4419, avg=880.02, stdev=205.69
00:38:45.660 clat percentiles (usec):
00:38:45.660 | 1.00th=[ 457], 5.00th=[ 545], 10.00th=[ 611], 20.00th=[ 709],
00:38:45.660 | 30.00th=[ 775], 40.00th=[ 832], 50.00th=[ 873], 60.00th=[ 906],
00:38:45.660 | 70.00th=[ 947], 80.00th=[ 979], 90.00th=[ 1045], 95.00th=[ 1106],
00:38:45.660 | 99.00th=[ 1205], 99.50th=[ 1254], 99.90th=[ 3916], 99.95th=[ 4293],
00:38:45.660 | 99.99th=[ 4424]
00:38:45.660 bw ( KiB/s): min= 4200, max= 4720, per=40.53%, avg=4497.60, stdev=229.60, samples=5
00:38:45.660 iops : min= 1050, max= 1180, avg=1124.40, stdev=57.40, samples=5
00:38:45.660 lat (usec) : 250=0.06%, 500=2.21%, 750=24.13%, 1000=57.21%
00:38:45.660 lat (msec) : 2=16.22%, 4=0.03%, 10=0.10%
00:38:45.660 cpu : usr=1.26%, sys=3.44%, ctx=3124, majf=0, minf=2
00:38:45.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:45.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.660 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.660 issued rwts: total=3120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:45.660 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:45.660 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=986134: Mon Sep 30 23:07:12 2024
00:38:45.660 read: IOPS=1111, BW=4446KiB/s (4553kB/s)(11.3MiB/2600msec)
00:38:45.660 slat (nsec): min=6463, max=88948, avg=26752.10, stdev=4908.76
00:38:45.660 clat (usec): min=208, max=1969, avg=858.90, stdev=184.28
00:38:45.660 lat (usec): min=232, max=1995, avg=885.65, stdev=184.46
00:38:45.660 clat percentiles (usec):
00:38:45.660 | 1.00th=[ 392], 5.00th=[ 570], 10.00th=[ 635], 20.00th=[ 693],
00:38:45.660 | 30.00th=[ 750], 40.00th=[ 816], 50.00th=[ 873], 60.00th=[ 930],
00:38:45.660 | 70.00th=[ 971], 80.00th=[ 1020], 90.00th=[ 1090], 95.00th=[ 1139],
00:38:45.660 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 1352],
00:38:45.660 | 99.99th=[ 1975]
00:38:45.660 bw ( KiB/s): min= 4176, max= 4888, per=40.45%, avg=4488.00, stdev=265.39, samples=5
00:38:45.660 iops : min= 1044, max= 1222, avg=1122.00, stdev=66.35, samples=5
00:38:45.660 lat (usec) : 250=0.07%, 500=3.11%, 750=27.05%, 1000=45.69%
00:38:45.660 lat (msec) : 2=24.04%
00:38:45.660 cpu : usr=1.96%, sys=4.42%, ctx=2894, majf=0, minf=2
00:38:45.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:45.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.660 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.660 issued rwts: total=2891,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:45.660 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:45.660
00:38:45.660 Run status group 0 (all jobs):
00:38:45.660 READ: bw=10.8MiB/s (11.4MB/s), 1482KiB/s-4475KiB/s (1518kB/s-4582kB/s), io=34.4MiB (36.0MB), run=2600-3172msec
00:38:45.660
00:38:45.660 Disk stats (read/write):
00:38:45.660 nvme0n1: ios=1049/0, merge=0/0, ticks=2748/0, in_queue=2748, util=94.29%
00:38:45.660 nvme0n2: ios=1687/0, merge=0/0, ticks=3001/0, in_queue=3001, util=95.29%
00:38:45.660 nvme0n3: ios=2945/0, merge=0/0, ticks=3263/0, in_queue=3263, util=99.52%
00:38:45.660 nvme0n4: ios=2890/0, merge=0/0, ticks=2216/0, in_queue=2216, util=96.39%
00:38:45.660 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:38:45.660 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:38:45.921 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:38:45.921 23:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:38:46.181 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:38:46.181 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:38:46.441 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:38:46.441 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:38:46.441 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:38:46.441 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 985746
00:38:46.441 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:38:46.441 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:38:46.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:38:46.701 nvmf hotplug test: fio failed as expected
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:46.701 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:46.701 rmmod nvme_tcp
00:38:46.962 rmmod nvme_fabrics
00:38:46.962 rmmod nvme_keyring
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 982511 ']'
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 982511
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 982511 ']'
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 982511
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 982511
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 982511'
00:38:46.962 killing process with pid 982511
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 982511
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 982511
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:38:46.962 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:47.222 23:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:49.133 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:49.133
00:38:49.133 real 0m28.491s
00:38:49.133 user 2m14.716s
00:38:49.133 sys 0m12.660s
00:38:49.133 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:49.133 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:38:49.133 ************************************
00:38:49.133 END TEST nvmf_fio_target
00:38:49.133 ************************************
00:38:49.133 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:38:49.133 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:38:49.133 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:49.133 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:49.133 ************************************
00:38:49.133 START TEST nvmf_bdevio
00:38:49.133 ************************************
00:38:49.133 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:38:49.395 * Looking for test storage...
00:38:49.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:38:49.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:49.395 --rc genhtml_branch_coverage=1
00:38:49.395 --rc genhtml_function_coverage=1
00:38:49.395 --rc genhtml_legend=1
00:38:49.395 --rc geninfo_all_blocks=1
00:38:49.395 --rc geninfo_unexecuted_blocks=1
00:38:49.395
00:38:49.395 '
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:38:49.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:49.395 --rc genhtml_branch_coverage=1
00:38:49.395 --rc genhtml_function_coverage=1
00:38:49.395 --rc genhtml_legend=1
00:38:49.395 --rc geninfo_all_blocks=1
00:38:49.395 --rc geninfo_unexecuted_blocks=1
00:38:49.395
00:38:49.395 '
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:38:49.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:49.395 --rc genhtml_branch_coverage=1
00:38:49.395 --rc genhtml_function_coverage=1
00:38:49.395 --rc genhtml_legend=1
00:38:49.395 --rc geninfo_all_blocks=1
00:38:49.395 --rc geninfo_unexecuted_blocks=1
00:38:49.395
00:38:49.395 '
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:38:49.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:49.395 --rc genhtml_branch_coverage=1
00:38:49.395 --rc genhtml_function_coverage=1
00:38:49.395 --rc genhtml_legend=1
00:38:49.395 --rc geninfo_all_blocks=1
00:38:49.395 --rc geninfo_unexecuted_blocks=1
00:38:49.395
00:38:49.395 '
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:49.395 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:49.396 23:07:16
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:38:49.396 23:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}")
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}")
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 ))
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:38:57.536 Found 0000:31:00.0 (0x8086 - 0x159b)
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:38:57.536 Found 0000:31:00.1 (0x8086 - 0x159b)
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 ))
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:38:57.536 Found net devices under 0000:31:00.0: cvl_0_0
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]]
00:38:57.536 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:38:57.537 Found net devices under 0000:31:00.1: cvl_0_1
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
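Both E810 ports have now been resolved to kernel interfaces. The discovery above boils down to a sysfs glob per PCI function; a minimal standalone sketch of the same lookup, using the two device addresses from this run (the loop itself is illustrative, not part of the harness):

    # Map each PCI function to the net devices the kernel registered for it.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
        done
    done

With the ice driver bound as here, this prints cvl_0_0 and cvl_0_1, matching the echoes in the trace.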
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
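With the interface roles assigned (cvl_0_0 as target port, cvl_0_1 as initiator port), nvmf_tcp_init next wires the two ports into a point-to-point rig. Condensed from the trace entries that follow, the sequence amounts to:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the default netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

Moving the target port into its own namespace forces 10.0.0.1 <-> 10.0.0.2 traffic over the physical link rather than the local stack; the two single-packet pings below verify the path before any NVMe/TCP traffic is attempted.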
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:38:57.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:57.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms
00:38:57.537
00:38:57.537 --- 10.0.0.2 ping statistics ---
00:38:57.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:57.537 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms
00:38:57.537 23:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:57.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:57.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms
00:38:57.537
00:38:57.537 --- 10.0.0.1 ping statistics ---
00:38:57.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:57.537 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
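nvmfappstart, expanded in the next entries, launches the target application inside the namespace and then polls its RPC socket until it answers. A rough shell equivalent (the rpc_get_methods probe and the sleep interval are assumptions standing in for waitforlisten's internal loop):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # Wait until the app responds on its UNIX-domain RPC socket
    # (waitforlisten caps this at max_retries=100 attempts).
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

The core mask -m 0x78 selects cores 3-6, which matches the four reactors reported in the startup notices below.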
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=991284
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 991284
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 991284 ']'
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:57.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:57.537 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:38:57.537 [2024-09-30 23:07:24.116420] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:38:57.537 [2024-09-30 23:07:24.117561] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:38:57.537 [2024-09-30 23:07:24.117612] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:57.537 [2024-09-30 23:07:24.208637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:57.537 [2024-09-30 23:07:24.299858] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:57.537 [2024-09-30 23:07:24.299928] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:57.537 [2024-09-30 23:07:24.299937] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:57.537 [2024-09-30 23:07:24.299945] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:57.537 [2024-09-30 23:07:24.299951] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:57.537 [2024-09-30 23:07:24.300114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:38:57.537 [2024-09-30 23:07:24.300373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:38:57.537 [2024-09-30 23:07:24.300531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:38:57.537 [2024-09-30 23:07:24.300533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:38:57.537 [2024-09-30 23:07:24.395743] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:38:57.537 [2024-09-30 23:07:24.396966] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:38:57.537 [2024-09-30 23:07:24.396981] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:38:57.537 [2024-09-30 23:07:24.397517] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:38:57.537 [2024-09-30 23:07:24.397555] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:58.110 23:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:38:58.110 [2024-09-30 23:07:24.997519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:38:58.111 Malloc0
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:38:58.111 [2024-09-30 23:07:25.081877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=()
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:38:58.111 {
00:38:58.111 "params": {
00:38:58.111 "name": "Nvme$subsystem",
00:38:58.111 "trtype": "$TEST_TRANSPORT",
00:38:58.111 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:58.111 "adrfam": "ipv4",
00:38:58.111 "trsvcid": "$NVMF_PORT",
00:38:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:58.111 "hdgst": ${hdgst:-false},
00:38:58.111 "ddgst": ${ddgst:-false}
00:38:58.111 },
00:38:58.111 "method": "bdev_nvme_attach_controller"
00:38:58.111 }
00:38:58.111 EOF
00:38:58.111 )")
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq .
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=,
00:38:58.111 23:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:38:58.111 "params": {
00:38:58.111 "name": "Nvme1",
00:38:58.111 "trtype": "tcp",
00:38:58.111 "traddr": "10.0.0.2",
00:38:58.111 "adrfam": "ipv4",
00:38:58.111 "trsvcid": "4420",
00:38:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:58.111 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:58.111 "hdgst": false,
00:38:58.111 "ddgst": false
00:38:58.111 },
00:38:58.111 "method": "bdev_nvme_attach_controller"
00:38:58.111 }'
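rpc_cmd is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock, so the target provisioned above could be built by hand with the same five calls (arguments copied from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then consumes the JSON printed above via --json /dev/fd/62 and attaches to that listener as bdev Nvme1, which appears as the Nvme1n1 I/O target in the run below.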
00:38:58.373 [2024-09-30 23:07:25.141289] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization...
00:38:58.373 [2024-09-30 23:07:25.141354] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991335 ]
00:38:58.373 [2024-09-30 23:07:25.223224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:38:58.373 [2024-09-30 23:07:25.321713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:38:58.373 [2024-09-30 23:07:25.321885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:38:58.373 [2024-09-30 23:07:25.321885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:38:58.633 I/O targets:
00:38:58.633 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:38:58.633
00:38:58.633
00:38:58.633 CUnit - A unit testing framework for C - Version 2.1-3
00:38:58.633 http://cunit.sourceforge.net/
00:38:58.633
00:38:58.633
00:38:58.633 Suite: bdevio tests on: Nvme1n1
00:38:58.633 Test: blockdev write read block ...passed
00:38:58.633 Test: blockdev write zeroes read block ...passed
00:38:58.633 Test: blockdev write zeroes read no split ...passed
00:38:58.633 Test: blockdev write zeroes read split ...passed
00:38:58.895 Test: blockdev write zeroes read split partial ...passed
00:38:58.895 Test: blockdev reset ...[2024-09-30 23:07:25.653065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:58.895 [2024-09-30 23:07:25.653169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2d1d0 (9): Bad file descriptor
00:38:58.895 [2024-09-30 23:07:25.660821] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:38:58.895 passed
00:38:58.895 Test: blockdev write read 8 blocks ...passed
00:38:58.895 Test: blockdev write read size > 128k ...passed
00:38:58.895 Test: blockdev write read invalid size ...passed
00:38:58.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:38:58.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:38:58.895 Test: blockdev write read max offset ...passed
00:38:58.895 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:38:58.895 Test: blockdev writev readv 8 blocks ...passed
00:38:58.895 Test: blockdev writev readv 30 x 1block ...passed
00:38:58.895 Test: blockdev writev readv block ...passed
00:38:58.895 Test: blockdev writev readv size > 128k ...passed
00:38:58.895 Test: blockdev writev readv size > 128k in two iovs ...passed
00:38:58.895 Test: blockdev comparev and writev ...[2024-09-30 23:07:25.888590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:38:58.895 [2024-09-30 23:07:25.888636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:58.895 [2024-09-30 23:07:25.888653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:38:58.895 [2024-09-30 23:07:25.888663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:38:58.895 [2024-09-30 23:07:25.889340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:38:58.895 [2024-09-30 23:07:25.889360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:38:58.895 [2024-09-30 23:07:25.889374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:38:58.895 [2024-09-30 23:07:25.889382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:38:58.895 [2024-09-30 23:07:25.890015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:38:58.895 [2024-09-30 23:07:25.890027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:38:58.895 [2024-09-30 23:07:25.890041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:38:58.895 [2024-09-30 23:07:25.890048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:38:58.895 [2024-09-30 23:07:25.890671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:38:58.895 [2024-09-30 23:07:25.890683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:38:58.895 [2024-09-30 23:07:25.890702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:38:58.895 [2024-09-30 23:07:25.890715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:38:59.156 passed
00:38:59.156 Test: blockdev nvme passthru rw ...passed
00:38:59.156 Test: blockdev nvme passthru vendor specific ...[2024-09-30 23:07:25.975883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:38:59.156 [2024-09-30 23:07:25.975904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:38:59.156 [2024-09-30 23:07:25.976350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:38:59.156 [2024-09-30 23:07:25.976362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:38:59.156 [2024-09-30 23:07:25.976763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:38:59.156 [2024-09-30 23:07:25.976774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:38:59.156 [2024-09-30 23:07:25.977177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:38:59.156 [2024-09-30 23:07:25.977189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:38:59.156 passed
00:38:59.156 Test: blockdev nvme admin passthru ...passed
00:38:59.156 Test: blockdev copy ...passed
00:38:59.156
00:38:59.156 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:38:59.156               suites      1      1    n/a      0        0
00:38:59.156                tests     23     23     23      0        0
00:38:59.156              asserts    152    152    152      0      n/a
00:38:59.156
00:38:59.156 Elapsed time =    1.031 seconds
00:38:59.417 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:59.417 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:59.418 rmmod nvme_tcp
00:38:59.418 rmmod nvme_fabrics
00:38:59.418 rmmod nvme_keyring
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
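The kernel initiator modules are unloaded at this point; what remains of nvmftestfini is stopping the target and undoing the network setup. Condensed from the surrounding trace (the explicit ip netns delete is an assumption about what the _remove_spdk_ns helper does internally):

    kill 991284 && wait 991284                             # killprocess: stop nvmf_tgt (reactor_3)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop the test's ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1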
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 991284 ']'
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 991284
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 991284 ']'
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 991284
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 991284
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']'
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 991284'
00:38:59.418 killing process with pid 991284
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 991284
00:38:59.418 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 991284
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:59.679 23:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:02.232 23:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:39:02.232
00:39:02.232 real 0m12.496s
00:39:02.232 user 0m9.657s
00:39:02.232 sys 0m6.627s
00:39:02.232 23:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:02.232 23:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:39:02.232 ************************************
00:39:02.232 END TEST nvmf_bdevio
00:39:02.232 ************************************
00:39:02.232 23:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:39:02.232
00:39:02.232 real 5m1.560s
00:39:02.232 user 10m16.237s
00:39:02.232 sys 2m8.210s
00:39:02.232 23:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:02.232 23:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:39:02.232 ************************************
00:39:02.232 END TEST nvmf_target_core_interrupt_mode
00:39:02.232 ************************************
00:39:02.232 23:07:28 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:39:02.232 23:07:28 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:39:02.232 23:07:28 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:02.232 23:07:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:39:02.232 ************************************
00:39:02.232 START TEST nvmf_interrupt
00:39:02.232 ************************************
00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:39:02.232 * Looking for test storage...
00:39:02.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:02.232 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:02.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.233 --rc genhtml_branch_coverage=1 00:39:02.233 --rc genhtml_function_coverage=1 00:39:02.233 --rc genhtml_legend=1 00:39:02.233 --rc geninfo_all_blocks=1 00:39:02.233 --rc geninfo_unexecuted_blocks=1 00:39:02.233 00:39:02.233 ' 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:02.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.233 --rc genhtml_branch_coverage=1 00:39:02.233 --rc genhtml_function_coverage=1 00:39:02.233 --rc genhtml_legend=1 00:39:02.233 --rc geninfo_all_blocks=1 00:39:02.233 --rc geninfo_unexecuted_blocks=1 00:39:02.233 00:39:02.233 ' 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:02.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.233 --rc genhtml_branch_coverage=1 00:39:02.233 --rc genhtml_function_coverage=1 00:39:02.233 --rc genhtml_legend=1 00:39:02.233 --rc geninfo_all_blocks=1 00:39:02.233 --rc geninfo_unexecuted_blocks=1 00:39:02.233 00:39:02.233 ' 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:02.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.233 --rc genhtml_branch_coverage=1 00:39:02.233 --rc genhtml_function_coverage=1 00:39:02.233 --rc genhtml_legend=1 00:39:02.233 --rc geninfo_all_blocks=1 00:39:02.233 --rc geninfo_unexecuted_blocks=1 00:39:02.233 00:39:02.233 ' 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:02.233 23:07:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:02.233 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:02.234 23:07:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:02.234 23:07:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:10.375 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:10.375 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:10.375 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:10.376 Found net devices under 0000:31:00.0: cvl_0_0 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:10.376 Found net devices under 0000:31:00.1: cvl_0_1 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:10.376 23:07:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:10.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:10.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:39:10.376 00:39:10.376 --- 10.0.0.2 ping statistics --- 00:39:10.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.376 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:10.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:10.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:39:10.376 00:39:10.376 --- 10.0.0.1 ping statistics --- 00:39:10.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.376 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=995956 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 995956 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 995956 ']' 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:10.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:10.376 23:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.376 [2024-09-30 23:07:36.737486] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:10.376 [2024-09-30 23:07:36.738615] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:39:10.376 [2024-09-30 23:07:36.738670] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:10.376 [2024-09-30 23:07:36.829767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:10.376 [2024-09-30 23:07:36.926033] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
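The nvmf_tcp_init sequence traced above boils down to a short ip/iptables recipe: park one port of the NIC pair in a private namespace, address both ends, and open TCP/4420 between them. A condensed sketch, with the interface and namespace names copied from this run (they differ per rig):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator end stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # sanity: initiator to target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back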
00:39:10.376 [2024-09-30 23:07:36.926094] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:10.376 [2024-09-30 23:07:36.926110] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:10.376 [2024-09-30 23:07:36.926118] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:10.376 [2024-09-30 23:07:36.926124] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:10.376 [2024-09-30 23:07:36.926232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:10.376 [2024-09-30 23:07:36.926234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.376 [2024-09-30 23:07:37.002157] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:10.376 [2024-09-30 23:07:37.002707] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:10.376 [2024-09-30 23:07:37.003054] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:10.638 5000+0 records in 00:39:10.638 5000+0 records out 00:39:10.638 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0188007 s, 545 MB/s 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.638 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.898 AIO0 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.898 [2024-09-30 23:07:37.675294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.898 23:07:37 
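Once the target is up in interrupt mode inside the namespace, the rest of the bring-up is ordinary rpc.py plumbing: a 10 MB zero-filled file becomes an AIO bdev, and the trace that follows exports it over a TCP transport as a namespace of cnode1. Equivalent standalone commands, with the backing-file path shortened to a placeholder and rpc.py assumed on PATH:

dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000   # 10 MB backing file, as above
rpc.py bdev_aio_create /tmp/aiofile AIO0 2048        # expose it as bdev AIO0
rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420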
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:10.898 [2024-09-30 23:07:37.723882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 995956 0 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 995956 0 idle 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=995956 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 995956 -w 256 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 995956 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.34 reactor_0' 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 995956 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.34 reactor_0 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:10.898 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 995956 1 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 995956 1 idle 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=995956 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:11.159 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:11.160 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:11.160 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:11.160 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:11.160 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:11.160 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:11.160 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:11.160 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 995956 -w 256 00:39:11.160 23:07:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 996000 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 996000 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=996156 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
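With spdk_nvme_perf about to drive random I/O from cores 2 and 3 (-c 0xC), the reactor_is_busy_or_idle helper traced around it is a one-shot top sample: list the target's threads in one batch iteration, pick the reactor_N row, read column 9 (%CPU), and compare it to a threshold, retrying up to ten times. A minimal standalone version of the same check, with the PID and thresholds from this run:

pid=995956 idx=0 state=busy   # state=idle before and after the perf run
line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu=${cpu%.*}                 # drop the fractional part, as the helper does
if [[ $state == busy ]]; then
    (( cpu >= 30 ))           # BUSY_THRESHOLD=30 while perf runs
else
    (( cpu <= 30 ))           # idle_threshold=30 otherwise
fi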
00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 995956 0 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 995956 0 busy 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=995956 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 995956 -w 256 00:39:11.160 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 995956 root 20 0 128.2g 44928 32256 R 62.5 0.0 0:00.45 reactor_0' 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 995956 root 20 0 128.2g 44928 32256 R 62.5 0.0 0:00.45 reactor_0 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=62.5 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=62 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 995956 1 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 995956 1 busy 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=995956 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 995956 -w 256 00:39:11.421 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 996000 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1' 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 996000 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:11.682 23:07:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 996156 00:39:21.798 Initializing NVMe Controllers 00:39:21.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:21.798 Controller IO queue size 256, less than required. 00:39:21.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:21.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:21.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:21.798 Initialization complete. Launching workers. 
00:39:21.798 ======================================================== 00:39:21.798 Latency(us) 00:39:21.798 Device Information : IOPS MiB/s Average min max 00:39:21.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18077.99 70.62 14166.51 3919.25 51344.05 00:39:21.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19235.79 75.14 13309.73 7781.48 30593.79 00:39:21.798 ======================================================== 00:39:21.798 Total : 37313.78 145.76 13724.83 3919.25 51344.05 00:39:21.798 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 995956 0 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 995956 0 idle 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=995956 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 995956 -w 256 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 995956 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.35 reactor_0' 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 995956 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.35 reactor_0 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 995956 1 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 995956 1 idle 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=995956 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 995956 -w 256 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 996000 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:39:21.798 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 996000 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:21.799 23:07:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:22.369 23:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:22.369 23:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:39:22.369 23:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:22.369 23:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:22.369 23:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 995956 0 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 995956 0 idle 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=995956 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 995956 -w 256 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 995956 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.72 reactor_0' 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 995956 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.72 reactor_0 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:24.910 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 995956 1 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 995956 1 idle 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=995956 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:24.911 23:07:51 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 995956 -w 256 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 996000 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 996000 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:24.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:24.911 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:24.911 rmmod nvme_tcp 00:39:25.171 rmmod nvme_fabrics 00:39:25.171 rmmod nvme_keyring 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 995956 ']' 00:39:25.171 
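On the initiator side, the I/O-path verification just completed above is three steps: connect with the host NQN generated by nvme gen-hostnqn, poll lsblk until the namespace's serial shows up, then disconnect by subsystem NQN. Condensed from the trace, with $NVME_HOSTNQN and $NVME_HOSTID as set earlier:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
for (( i = 0; i <= 15; i++ )); do                    # waitforserial: ~2 s per try
    lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
    sleep 2
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1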
23:07:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 995956 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 995956 ']' 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 995956 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:25.171 23:07:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 995956 00:39:25.171 23:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:25.171 23:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:25.171 23:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 995956' 00:39:25.171 killing process with pid 995956 00:39:25.171 23:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 995956 00:39:25.171 23:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 995956 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:25.432 23:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:27.344 23:07:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:27.344 00:39:27.344 real 0m25.520s 00:39:27.344 user 0m40.294s 00:39:27.344 sys 0m9.974s 00:39:27.344 23:07:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:27.344 23:07:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:27.344 ************************************ 00:39:27.344 END TEST nvmf_interrupt 00:39:27.344 ************************************ 00:39:27.344 00:39:27.344 real 30m6.857s 00:39:27.344 user 61m9.178s 00:39:27.344 sys 10m20.350s 00:39:27.344 23:07:54 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:27.344 23:07:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:27.344 ************************************ 00:39:27.344 END TEST nvmf_tcp 00:39:27.344 ************************************ 00:39:27.605 23:07:54 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:39:27.605 23:07:54 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:27.605 23:07:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
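The nvmftestfini sequence that closed out nvmf_interrupt above mirrors the setup in reverse: unload the nvme-tcp modules, kill the target by PID, strip only the SPDK-tagged firewall rules (each rule was inserted with an SPDK_NVMF comment, so a save/filter/restore round-trip removes exactly those), then drop the namespace state. Roughly:

modprobe -r nvme-tcp                                 # failures tolerated, as in the retry loop above
modprobe -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                   # killprocess 995956
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk                      # what _remove_spdk_ns amounts to (assumed; its body is not traced)
ip -4 addr flush cvl_0_1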
00:39:27.605 23:07:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:27.605 23:07:54 -- common/autotest_common.sh@10 -- # set +x 00:39:27.605 ************************************ 00:39:27.605 START TEST spdkcli_nvmf_tcp 00:39:27.605 ************************************ 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:27.605 * Looking for test storage... 00:39:27.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:27.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:27.605 --rc genhtml_branch_coverage=1 00:39:27.605 --rc genhtml_function_coverage=1 00:39:27.605 --rc genhtml_legend=1 00:39:27.605 --rc geninfo_all_blocks=1 00:39:27.605 --rc geninfo_unexecuted_blocks=1 00:39:27.605 00:39:27.605 ' 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:27.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:27.605 --rc genhtml_branch_coverage=1 00:39:27.605 --rc genhtml_function_coverage=1 00:39:27.605 --rc genhtml_legend=1 00:39:27.605 --rc geninfo_all_blocks=1 00:39:27.605 --rc geninfo_unexecuted_blocks=1 00:39:27.605 00:39:27.605 ' 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:27.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:27.605 --rc genhtml_branch_coverage=1 00:39:27.605 --rc genhtml_function_coverage=1 00:39:27.605 --rc genhtml_legend=1 00:39:27.605 --rc geninfo_all_blocks=1 00:39:27.605 --rc geninfo_unexecuted_blocks=1 00:39:27.605 00:39:27.605 ' 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:27.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:27.605 --rc genhtml_branch_coverage=1 00:39:27.605 --rc genhtml_function_coverage=1 00:39:27.605 --rc genhtml_legend=1 00:39:27.605 --rc geninfo_all_blocks=1 00:39:27.605 --rc geninfo_unexecuted_blocks=1 00:39:27.605 00:39:27.605 ' 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:27.605 23:07:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:27.867 
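The lt 1.15 2 check gating the lcov options, replayed here for the spdkcli test, is a field-wise version compare: split both strings on ., - and :, then walk the components numerically until they differ. In outline (missing components default to 0, which is what the helper's decimal step effectively yields):

IFS=.-: read -ra ver1 <<< "1.15"
IFS=.-: read -ra ver2 <<< "2"
for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo "not lt"; break; }
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo "lt"; break; }
done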
23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:27.867 23:07:54 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:27.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:27.867 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=999474 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 999474 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 999474 ']' 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:27.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:27.868 23:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:27.868 [2024-09-30 23:07:54.719083] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
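waitforlisten above blocks until the freshly started nvmf_tgt (pid 999474) brings up its JSON-RPC socket. The gist of such a gate, as a sketch rather than the real autotest_common.sh implementation (which also confirms the socket actually answers an RPC before returning):

    # Poll until the pid is still alive AND the UNIX-domain socket exists.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [ -S "$sock" ] && return 0               # socket is up; target is listening
            sleep 0.1
        done
        return 1                                     # gave up after ~10 seconds
    }
    wait_for_rpc_sock 999474 || echo 'nvmf_tgt never came up'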
00:39:27.868 [2024-09-30 23:07:54.719142] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999474 ] 00:39:27.868 [2024-09-30 23:07:54.798661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:27.868 [2024-09-30 23:07:54.882568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:27.868 [2024-09-30 23:07:54.882571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:28.811 23:07:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:28.811 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:28.811 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:28.811 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:28.811 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:28.811 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:28.811 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:28.811 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:28.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:28.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:28.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:28.811 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:28.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:28.812 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:28.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:28.812 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:28.812 ' 00:39:31.359 [2024-09-30 23:07:58.308456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:32.743 [2024-09-30 23:07:59.668720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:35.289 [2024-09-30 23:08:02.191931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:37.838 [2024-09-30 23:08:04.414271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:39.226 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:39.226 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:39.226 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:39.226 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:39.226 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:39.226 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:39.226 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:39.226 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:39.226 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:39.226 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:39.226 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:39.226 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:39.226 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:39.226 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:39.226 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:39.226 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:39.226 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:39.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:39.227 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:39.227 23:08:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:39.227 23:08:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:39.227 23:08:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:39.227 23:08:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:39.227 23:08:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:39.227 23:08:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:39.227 23:08:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:39.227 23:08:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:39.799 23:08:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:39.799 23:08:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:39.799 23:08:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:39.799 23:08:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:39.799 23:08:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:39.799 
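check_match above verifies the configuration just built: it dumps the live /nvmf object tree with spdkcli.py ll and compares the dump against a checked-in expectations file. Roughly, with plain diff standing in for SPDK's test/app/match tool (which additionally tolerates wildcard fields such as UUIDs, so plain diff is only an approximation):

    spdkcli=/path/to/spdk/scripts/spdkcli.py            # assumption: adjust to your checkout
    match_file=/path/to/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
    "$spdkcli" ll /nvmf > /tmp/spdkcli_nvmf.test        # snapshot the live object tree
    diff -u "$match_file" /tmp/spdkcli_nvmf.test && echo 'config matches expectations'
    rm -f /tmp/spdkcli_nvmf.test                        # the real test also removes its dump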
23:08:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:39.799 23:08:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:39.799 23:08:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:39.799 23:08:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:39.799 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:39.799 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:39.799 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:39.799 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:39.799 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:39.799 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:39.799 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:39.799 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:39.799 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:39.799 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:39.799 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:39.799 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:39.799 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:39.799 ' 00:39:46.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:46.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:46.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:46.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:46.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:46.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:46.383 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:46.383 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:46.383 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:46.383 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:46.383 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:46.383 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:46.383 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:46.383 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:46.383 
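The delete pass above mirrors the create pass in reverse, and the ordering matters: namespaces, hosts, and listeners are detached before their subsystems are deleted, and the malloc bdevs go last, once nothing references them. The same teardown expressed as raw RPCs (a sketch; the method names are SPDK's, but verify flag spellings against rpc.py --help, and the rpc.py path is an assumption):

    rpc=/path/to/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1        # detach namespace nsid=1 first
    $rpc nvmf_subsystem_remove_host nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2
    $rpc nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
    $rpc nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3             # only now is the subsystem free to go
    $rpc bdev_malloc_delete Malloc1                                    # bdevs last, once unreferenced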
23:08:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 999474 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 999474 ']' 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 999474 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 999474 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 999474' 00:39:46.383 killing process with pid 999474 00:39:46.383 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 999474 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 999474 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 999474 ']' 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 999474 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 999474 ']' 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 999474 00:39:46.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (999474) - No such process 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 999474 is not found' 00:39:46.384 Process with pid 999474 is not found 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:46.384 00:39:46.384 real 0m18.182s 00:39:46.384 user 0m40.255s 00:39:46.384 sys 0m0.955s 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:46.384 23:08:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:46.384 ************************************ 00:39:46.384 END TEST spdkcli_nvmf_tcp 00:39:46.384 ************************************ 00:39:46.384 23:08:12 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:46.384 23:08:12 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:46.384 23:08:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:46.384 23:08:12 -- common/autotest_common.sh@10 -- # set +x 00:39:46.384 ************************************ 00:39:46.384 START TEST nvmf_identify_passthru 00:39:46.384 ************************************ 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:46.384 * Looking for test storage... 
00:39:46.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:46.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.384 --rc genhtml_branch_coverage=1 00:39:46.384 --rc genhtml_function_coverage=1 00:39:46.384 --rc genhtml_legend=1 00:39:46.384 --rc geninfo_all_blocks=1 00:39:46.384 --rc geninfo_unexecuted_blocks=1 00:39:46.384 00:39:46.384 ' 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:46.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.384 --rc genhtml_branch_coverage=1 00:39:46.384 --rc genhtml_function_coverage=1 00:39:46.384 --rc genhtml_legend=1 00:39:46.384 --rc geninfo_all_blocks=1 00:39:46.384 --rc geninfo_unexecuted_blocks=1 00:39:46.384 00:39:46.384 ' 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:46.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.384 --rc genhtml_branch_coverage=1 00:39:46.384 --rc genhtml_function_coverage=1 00:39:46.384 --rc genhtml_legend=1 00:39:46.384 --rc geninfo_all_blocks=1 00:39:46.384 --rc geninfo_unexecuted_blocks=1 00:39:46.384 00:39:46.384 ' 00:39:46.384 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:46.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.384 --rc genhtml_branch_coverage=1 00:39:46.384 --rc genhtml_function_coverage=1 00:39:46.384 --rc genhtml_legend=1 00:39:46.384 --rc geninfo_all_blocks=1 00:39:46.384 --rc geninfo_unexecuted_blocks=1 00:39:46.384 00:39:46.384 ' 00:39:46.384 23:08:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:46.384 23:08:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:46.384 23:08:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.384 23:08:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.384 23:08:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.384 23:08:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:46.384 23:08:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:46.384 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:46.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:46.385 23:08:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:46.385 23:08:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:46.385 23:08:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:46.385 23:08:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:46.385 23:08:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:46.385 23:08:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.385 23:08:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.385 23:08:12 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.385 23:08:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:46.385 23:08:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.385 23:08:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:46.385 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:46.385 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:46.385 23:08:12 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:39:46.385 23:08:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:39:54.529 23:08:20 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:54.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:54.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:54.529 
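The two "Found 0000:31:00.*" hits above come from matching every PCI function's vendor/device IDs against the supported-NIC tables; 0x8086:0x159b is the Intel E810 that SPDK_TEST_NVMF_NICS=e810 asks for. The sysfs walk behind it, in miniature (a sketch of the approach, not the common.sh code itself):

    # Match PCI functions by vendor/device ID, then read the bound netdev
    # names out of sysfs -- the same place common.sh finds cvl_0_0/cvl_0_1.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
        [ "$(cat "$pci/device")" = 0x159b ] || continue   # Intel E810 (ice driver)
        for net in "$pci"/net/*; do
            [ -e "$net" ] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done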
23:08:20 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:54.529 Found net devices under 0000:31:00.0: cvl_0_0 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:54.529 Found net devices under 0000:31:00.1: cvl_0_1 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:54.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:54.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:39:54.529 00:39:54.529 --- 10.0.0.2 ping statistics --- 00:39:54.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.529 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:54.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:54.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:39:54.529 00:39:54.529 --- 10.0.0.1 ping statistics --- 00:39:54.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.529 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:54.529 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:54.530 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:54.530 23:08:20 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:54.530 23:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:54.530 23:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:39:54.530 23:08:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:39:54.530 23:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:39:54.530 23:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:39:54.530 23:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:39:54.530 23:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:54.530 23:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:54.530 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:39:54.530 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:39:54.530 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:39:54.530 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:54.790 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:39:54.790 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:54.790 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:54.790 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1006951 00:39:54.790 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:54.790 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:54.790 23:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1006951 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1006951 ']' 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:54.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:54.790 23:08:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:55.051 [2024-09-30 23:08:21.833911] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:39:55.051 [2024-09-30 23:08:21.833986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:55.051 [2024-09-30 23:08:21.922429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:55.051 [2024-09-30 23:08:22.021608] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:55.051 [2024-09-30 23:08:22.021674] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:55.051 [2024-09-30 23:08:22.021683] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:55.051 [2024-09-30 23:08:22.021691] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:55.051 [2024-09-30 23:08:22.021697] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:55.051 [2024-09-30 23:08:22.021856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:55.051 [2024-09-30 23:08:22.022022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:55.051 [2024-09-30 23:08:22.022072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.051 [2024-09-30 23:08:22.022072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:39:55.994 23:08:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:55.994 INFO: Log level set to 20 00:39:55.994 INFO: Requests: 00:39:55.994 { 00:39:55.994 "jsonrpc": "2.0", 00:39:55.994 "method": "nvmf_set_config", 00:39:55.994 "id": 1, 00:39:55.994 "params": { 00:39:55.994 "admin_cmd_passthru": { 00:39:55.994 "identify_ctrlr": true 00:39:55.994 } 00:39:55.994 } 00:39:55.994 } 00:39:55.994 00:39:55.994 INFO: response: 00:39:55.994 { 00:39:55.994 "jsonrpc": "2.0", 00:39:55.994 "id": 1, 00:39:55.994 "result": true 00:39:55.994 } 00:39:55.994 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.994 23:08:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:55.994 INFO: Setting log level to 20 00:39:55.994 INFO: Setting log level to 20 00:39:55.994 INFO: Log level set to 20 00:39:55.994 INFO: Log level set to 20 00:39:55.994 INFO: Requests: 00:39:55.994 { 00:39:55.994 "jsonrpc": "2.0", 00:39:55.994 "method": "framework_start_init", 00:39:55.994 "id": 1 00:39:55.994 } 00:39:55.994 00:39:55.994 INFO: Requests: 00:39:55.994 { 00:39:55.994 "jsonrpc": "2.0", 00:39:55.994 "method": "framework_start_init", 00:39:55.994 "id": 1 00:39:55.994 } 00:39:55.994 00:39:55.994 [2024-09-30 23:08:22.765156] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:55.994 INFO: response: 00:39:55.994 { 00:39:55.994 "jsonrpc": "2.0", 00:39:55.994 "id": 1, 00:39:55.994 "result": true 00:39:55.994 } 00:39:55.994 00:39:55.994 INFO: response: 00:39:55.994 { 00:39:55.994 "jsonrpc": "2.0", 00:39:55.994 "id": 1, 00:39:55.994 "result": true 00:39:55.994 } 00:39:55.994 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.994 23:08:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.994 23:08:22 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:39:55.994 INFO: Setting log level to 40 00:39:55.994 INFO: Setting log level to 40 00:39:55.994 INFO: Setting log level to 40 00:39:55.994 [2024-09-30 23:08:22.778702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.994 23:08:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:55.994 23:08:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.994 23:08:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:56.256 Nvme0n1 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.256 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.256 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.256 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:56.256 [2024-09-30 23:08:23.169725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.256 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:56.256 [ 00:39:56.256 { 00:39:56.256 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:56.256 "subtype": "Discovery", 00:39:56.256 "listen_addresses": [], 00:39:56.256 "allow_any_host": true, 00:39:56.256 "hosts": [] 00:39:56.256 }, 00:39:56.256 { 00:39:56.256 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:56.256 "subtype": "NVMe", 00:39:56.256 "listen_addresses": [ 00:39:56.256 { 00:39:56.256 "trtype": "TCP", 00:39:56.256 "adrfam": "IPv4", 00:39:56.256 "traddr": "10.0.0.2", 00:39:56.256 "trsvcid": "4420" 00:39:56.256 } 00:39:56.256 ], 00:39:56.256 "allow_any_host": true, 00:39:56.256 "hosts": [], 00:39:56.256 "serial_number": 
"SPDK00000000000001", 00:39:56.256 "model_number": "SPDK bdev Controller", 00:39:56.256 "max_namespaces": 1, 00:39:56.256 "min_cntlid": 1, 00:39:56.256 "max_cntlid": 65519, 00:39:56.256 "namespaces": [ 00:39:56.256 { 00:39:56.256 "nsid": 1, 00:39:56.256 "bdev_name": "Nvme0n1", 00:39:56.256 "name": "Nvme0n1", 00:39:56.256 "nguid": "3634473052605494002538450000002B", 00:39:56.256 "uuid": "36344730-5260-5494-0025-38450000002b" 00:39:56.256 } 00:39:56.256 ] 00:39:56.256 } 00:39:56.256 ] 00:39:56.256 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.256 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:56.256 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:56.256 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:56.517 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:39:56.517 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:56.517 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:56.517 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:56.779 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:39:56.779 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:39:56.779 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:39:56.779 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.779 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:56.779 23:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:56.779 rmmod nvme_tcp 00:39:56.779 rmmod nvme_fabrics 00:39:56.779 rmmod nvme_keyring 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 
1006951 ']' 00:39:56.779 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 1006951 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1006951 ']' 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1006951 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1006951 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1006951' 00:39:56.779 killing process with pid 1006951 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1006951 00:39:56.779 23:08:23 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1006951 00:39:57.041 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:57.041 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:57.041 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:57.041 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:39:57.041 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:57.041 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:39:57.041 23:08:23 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:39:57.041 23:08:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:57.041 23:08:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:57.041 23:08:24 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:57.041 23:08:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:57.041 23:08:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:59.587 23:08:26 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:59.587 00:39:59.587 real 0m13.405s 00:39:59.587 user 0m10.081s 00:39:59.587 sys 0m6.972s 00:39:59.587 23:08:26 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:59.587 23:08:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:59.587 ************************************ 00:39:59.587 END TEST nvmf_identify_passthru 00:39:59.587 ************************************ 00:39:59.587 23:08:26 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:59.587 23:08:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:59.587 23:08:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:59.587 23:08:26 -- common/autotest_common.sh@10 -- # set +x 00:39:59.587 ************************************ 00:39:59.587 START TEST nvmf_dif 00:39:59.587 ************************************ 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:59.587 * Looking for test storage... 
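Before the nvmf_dif suite gets going, a note on the test that just finished: nvmf_identify_passthru configures the target entirely over JSON-RPC, and the whole flow condenses to a short sequence. The sketch below replays it with SPDK's scripts/rpc.py (the rpc_cmd calls in the trace are the suite's wrapper around the same RPCs); the PCIe address, NQN, serial number, and the 10.0.0.2:4420 listener are the values from this particular run, not fixed constants.

# Condensed sketch of the identify-passthru setup driven above (assumes a running nvmf_tgt).
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr     # route Identify admin commands to the backing controller
scripts/rpc.py framework_start_init                          # finish init; "Custom identify ctrlr handler enabled" appears in the trace
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options as in the run above
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1   # allow any host, serial, max 1 namespace
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The assertion: Identify data read over the fabric must match the physical drive.
# That is why the trace compares S64GNE0R605494 (and model SAMSUNG) against itself,
# rather than seeing the SPDK default serial SPDK00000000000001.
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'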
00:39:59.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:59.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:59.587 --rc genhtml_branch_coverage=1 00:39:59.587 --rc genhtml_function_coverage=1 00:39:59.587 --rc genhtml_legend=1 00:39:59.587 --rc geninfo_all_blocks=1 00:39:59.587 --rc geninfo_unexecuted_blocks=1 00:39:59.587 00:39:59.587 ' 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:59.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:59.587 --rc genhtml_branch_coverage=1 00:39:59.587 --rc genhtml_function_coverage=1 00:39:59.587 --rc genhtml_legend=1 00:39:59.587 --rc geninfo_all_blocks=1 00:39:59.587 --rc geninfo_unexecuted_blocks=1 00:39:59.587 00:39:59.587 ' 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:39:59.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:59.587 --rc genhtml_branch_coverage=1 00:39:59.587 --rc genhtml_function_coverage=1 00:39:59.587 --rc genhtml_legend=1 00:39:59.587 --rc geninfo_all_blocks=1 00:39:59.587 --rc geninfo_unexecuted_blocks=1 00:39:59.587 00:39:59.587 ' 00:39:59.587 23:08:26 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:59.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:59.587 --rc genhtml_branch_coverage=1 00:39:59.587 --rc genhtml_function_coverage=1 00:39:59.587 --rc genhtml_legend=1 00:39:59.587 --rc geninfo_all_blocks=1 00:39:59.587 --rc geninfo_unexecuted_blocks=1 00:39:59.587 00:39:59.587 ' 00:39:59.587 23:08:26 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:59.587 23:08:26 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:59.587 23:08:26 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:59.587 23:08:26 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:59.587 23:08:26 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:59.587 23:08:26 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:39:59.587 23:08:26 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:59.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:59.587 23:08:26 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:59.587 23:08:26 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:59.587 23:08:26 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:59.587 23:08:26 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:59.587 23:08:26 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:59.587 23:08:26 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:59.588 23:08:26 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:59.588 23:08:26 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:59.588 23:08:26 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:59.588 23:08:26 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:59.588 23:08:26 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:59.588 23:08:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:59.588 23:08:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:59.588 23:08:26 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:59.588 23:08:26 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:59.588 23:08:26 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:39:59.588 23:08:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:07.728 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:07.728 23:08:33 nvmf_dif 
-- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:07.728 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:07.728 23:08:33 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:07.729 Found net devices under 0000:31:00.0: cvl_0_0 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:07.729 Found net devices under 0000:31:00.1: cvl_0_1 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:07.729 
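Worth pausing on the block that follows: this is the interesting part of nvmftestinit on physical hardware. The two ports of the E810 NIC found above (cvl_0_0 and cvl_0_1) are split across network namespaces, apparently so that a single machine can act as both NVMe-oF host and target over a real link; the target application is then started inside that namespace (visible below as ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt). Condensed, with names and addresses exactly as in this run:

# Sketch of the namespace plumbing performed by the trace below.
ip netns add cvl_0_0_ns_spdk                         # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps port 1 in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the suite also tags this rule with an SPDK_NVMF comment
# so that cleanup can later filter it out with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity: reachable both ways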
23:08:33 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:07.729 23:08:33 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:07.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:07.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:40:07.729 00:40:07.729 --- 10.0.0.2 ping statistics --- 00:40:07.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.729 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:40:07.729 23:08:34 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:07.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:07.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:40:07.729 00:40:07.729 --- 10.0.0.1 ping statistics --- 00:40:07.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.729 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:40:07.729 23:08:34 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:07.729 23:08:34 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:40:07.729 23:08:34 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:40:07.729 23:08:34 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:11.028 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:11.028 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:11.028 23:08:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:11.028 23:08:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:11.028 23:08:37 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:11.028 23:08:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=1013043 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 1013043 00:40:11.028 23:08:37 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:11.028 23:08:37 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1013043 ']' 00:40:11.028 23:08:37 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:11.028 23:08:37 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:11.028 23:08:37 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:11.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:11.028 23:08:37 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:11.028 23:08:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:11.289 [2024-09-30 23:08:38.051067] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:40:11.289 [2024-09-30 23:08:38.051129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:11.289 [2024-09-30 23:08:38.142908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.289 [2024-09-30 23:08:38.238614] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:11.289 [2024-09-30 23:08:38.238680] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:11.289 [2024-09-30 23:08:38.238688] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:11.289 [2024-09-30 23:08:38.238695] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:11.289 [2024-09-30 23:08:38.238701] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:11.289 [2024-09-30 23:08:38.238730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:11.894 23:08:38 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:11.894 23:08:38 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:40:11.894 23:08:38 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:11.894 23:08:38 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:11.894 23:08:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:12.166 23:08:38 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:12.166 23:08:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:12.166 23:08:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:12.166 23:08:38 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.166 23:08:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:12.166 [2024-09-30 23:08:38.932970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:12.166 23:08:38 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.166 23:08:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:12.166 23:08:38 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:12.166 23:08:38 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:12.166 23:08:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:12.166 ************************************ 00:40:12.166 START TEST fio_dif_1_default 00:40:12.166 ************************************ 00:40:12.166 23:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:40:12.166 23:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:12.166 23:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:12.166 23:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:12.166 23:08:38 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:12.166 23:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:12.167 23:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:12.167 23:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.167 23:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:12.167 bdev_null0 00:40:12.167 23:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.167 23:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:12.167 23:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.167 23:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:12.167 [2024-09-30 23:08:39.025440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:12.167 { 00:40:12.167 "params": { 00:40:12.167 "name": "Nvme$subsystem", 00:40:12.167 "trtype": "$TEST_TRANSPORT", 00:40:12.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:12.167 "adrfam": "ipv4", 00:40:12.167 "trsvcid": "$NVMF_PORT", 00:40:12.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:12.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:12.167 "hdgst": ${hdgst:-false}, 00:40:12.167 
"ddgst": ${ddgst:-false} 00:40:12.167 }, 00:40:12.167 "method": "bdev_nvme_attach_controller" 00:40:12.167 } 00:40:12.167 EOF 00:40:12.167 )") 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:12.167 "params": { 00:40:12.167 "name": "Nvme0", 00:40:12.167 "trtype": "tcp", 00:40:12.167 "traddr": "10.0.0.2", 00:40:12.167 "adrfam": "ipv4", 00:40:12.167 "trsvcid": "4420", 00:40:12.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:12.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:12.167 "hdgst": false, 00:40:12.167 "ddgst": false 00:40:12.167 }, 00:40:12.167 "method": "bdev_nvme_attach_controller" 00:40:12.167 }' 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:12.167 23:08:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:12.734 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:12.734 fio-3.35 00:40:12.734 Starting 1 thread 00:40:24.969 00:40:24.969 filename0: (groupid=0, jobs=1): err= 0: pid=1013609: Mon Sep 30 23:08:50 2024 00:40:24.969 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10040msec) 00:40:24.969 slat (nsec): min=5396, max=36567, avg=6313.81, stdev=1874.76 00:40:24.969 clat (usec): min=40880, max=44124, avg=41126.43, stdev=382.93 00:40:24.969 lat (usec): min=40886, max=44160, avg=41132.74, stdev=384.03 00:40:24.969 clat percentiles (usec): 00:40:24.969 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:24.969 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:24.969 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:40:24.969 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:40:24.969 | 99.99th=[44303] 00:40:24.969 bw ( KiB/s): min= 352, max= 416, per=99.78%, avg=388.80, stdev=15.66, samples=20 00:40:24.969 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:40:24.969 lat (msec) : 50=100.00% 00:40:24.969 cpu : usr=93.73%, sys=6.02%, ctx=12, majf=0, minf=218 00:40:24.969 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:24.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.969 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:24.969 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:24.969 00:40:24.969 Run status group 0 (all jobs): 
00:40:24.969 READ: bw=389KiB/s (398kB/s), 389KiB/s-389KiB/s (398kB/s-398kB/s), io=3904KiB (3998kB), run=10040-10040msec 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 00:40:24.969 real 0m11.263s 00:40:24.969 user 0m19.672s 00:40:24.969 sys 0m0.997s 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 ************************************ 00:40:24.969 END TEST fio_dif_1_default 00:40:24.969 ************************************ 00:40:24.969 23:08:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:24.969 23:08:50 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:24.969 23:08:50 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 ************************************ 00:40:24.969 START TEST fio_dif_1_multi_subsystems 00:40:24.969 ************************************ 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 bdev_null0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 [2024-09-30 23:08:50.371109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 bdev_null1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:24.969 { 00:40:24.969 "params": { 00:40:24.969 "name": "Nvme$subsystem", 00:40:24.969 "trtype": "$TEST_TRANSPORT", 00:40:24.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.969 "adrfam": "ipv4", 00:40:24.969 "trsvcid": "$NVMF_PORT", 00:40:24.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.969 "hdgst": ${hdgst:-false}, 00:40:24.969 "ddgst": ${ddgst:-false} 00:40:24.969 }, 00:40:24.969 "method": "bdev_nvme_attach_controller" 00:40:24.969 } 00:40:24.969 EOF 00:40:24.969 )") 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # grep libasan 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:24.969 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:24.970 { 00:40:24.970 "params": { 00:40:24.970 "name": "Nvme$subsystem", 00:40:24.970 "trtype": "$TEST_TRANSPORT", 00:40:24.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.970 "adrfam": "ipv4", 00:40:24.970 "trsvcid": "$NVMF_PORT", 00:40:24.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.970 "hdgst": ${hdgst:-false}, 00:40:24.970 "ddgst": ${ddgst:-false} 00:40:24.970 }, 00:40:24.970 "method": "bdev_nvme_attach_controller" 00:40:24.970 } 00:40:24.970 EOF 00:40:24.970 )") 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:24.970 "params": { 00:40:24.970 "name": "Nvme0", 00:40:24.970 "trtype": "tcp", 00:40:24.970 "traddr": "10.0.0.2", 00:40:24.970 "adrfam": "ipv4", 00:40:24.970 "trsvcid": "4420", 00:40:24.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:24.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:24.970 "hdgst": false, 00:40:24.970 "ddgst": false 00:40:24.970 }, 00:40:24.970 "method": "bdev_nvme_attach_controller" 00:40:24.970 },{ 00:40:24.970 "params": { 00:40:24.970 "name": "Nvme1", 00:40:24.970 "trtype": "tcp", 00:40:24.970 "traddr": "10.0.0.2", 00:40:24.970 "adrfam": "ipv4", 00:40:24.970 "trsvcid": "4420", 00:40:24.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:24.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:24.970 "hdgst": false, 00:40:24.970 "ddgst": false 00:40:24.970 }, 00:40:24.970 "method": "bdev_nvme_attach_controller" 00:40:24.970 }' 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:24.970 23:08:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:24.970 23:08:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.970 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:24.970 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:24.970 fio-3.35 00:40:24.970 Starting 2 threads 00:40:34.960 00:40:34.960 filename0: (groupid=0, jobs=1): err= 0: pid=1016068: Mon Sep 30 23:09:01 2024 00:40:34.960 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10006msec) 00:40:34.960 slat (nsec): min=5406, max=35076, avg=6374.24, stdev=1510.99 00:40:34.960 clat (usec): min=524, max=41968, avg=40823.37, stdev=2581.62 00:40:34.960 lat (usec): min=530, max=42003, avg=40829.74, stdev=2581.68 00:40:34.960 clat percentiles (usec): 00:40:34.960 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:34.960 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:34.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:34.960 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:40:34.960 | 99.99th=[42206] 00:40:34.960 bw ( KiB/s): min= 384, max= 416, per=33.86%, avg=390.40, stdev=13.13, samples=20 00:40:34.960 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:40:34.960 lat (usec) : 750=0.41% 00:40:34.960 lat (msec) : 50=99.59% 00:40:34.960 cpu : usr=95.52%, sys=4.27%, ctx=17, majf=0, minf=53 00:40:34.960 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:34.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.960 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:34.960 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:34.960 filename1: (groupid=0, jobs=1): err= 0: pid=1016069: Mon Sep 30 23:09:01 2024 00:40:34.960 read: IOPS=190, BW=762KiB/s (780kB/s)(7648KiB/10042msec) 00:40:34.960 slat (nsec): min=5387, max=45778, avg=6291.25, stdev=1571.94 00:40:34.960 clat (usec): min=468, max=44010, avg=20989.23, stdev=20160.56 00:40:34.960 lat (usec): min=477, max=44040, avg=20995.52, stdev=20160.54 00:40:34.960 clat percentiles (usec): 00:40:34.960 | 1.00th=[ 619], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 840], 00:40:34.960 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 947], 60.00th=[41157], 00:40:34.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:34.960 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:40:34.960 | 99.99th=[43779] 00:40:34.960 bw ( KiB/s): min= 672, max= 832, per=66.23%, avg=763.20, stdev=29.87, samples=20 00:40:34.960 iops : min= 168, max= 208, avg=190.80, stdev= 7.47, samples=20 00:40:34.960 lat (usec) : 500=0.21%, 750=1.57%, 1000=48.22% 00:40:34.960 lat (msec) : 50=50.00% 00:40:34.960 cpu : usr=95.51%, sys=4.28%, ctx=18, majf=0, minf=183 00:40:34.960 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:34.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.960 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.960 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:34.960 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:34.960 00:40:34.960 Run status group 0 (all jobs): 00:40:34.960 READ: bw=1152KiB/s (1180kB/s), 392KiB/s-762KiB/s (401kB/s-780kB/s), io=11.3MiB (11.8MB), run=10006-10042msec 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.960 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.961 00:40:34.961 real 0m11.496s 00:40:34.961 user 0m31.644s 00:40:34.961 sys 0m1.265s 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:34.961 23:09:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:34.961 ************************************ 00:40:34.961 END TEST fio_dif_1_multi_subsystems 00:40:34.961 ************************************ 00:40:34.961 23:09:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:40:34.961 23:09:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:34.961 23:09:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:34.961 23:09:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:34.961 ************************************ 00:40:34.961 START TEST fio_dif_rand_params 00:40:34.961 ************************************ 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:34.961 bdev_null0 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:34.961 [2024-09-30 23:09:01.953798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:34.961 23:09:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:34.961 { 00:40:34.961 "params": { 00:40:34.961 "name": "Nvme$subsystem", 00:40:34.961 "trtype": "$TEST_TRANSPORT", 00:40:34.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:34.961 "adrfam": "ipv4", 00:40:34.961 "trsvcid": "$NVMF_PORT", 00:40:34.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:34.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:34.961 "hdgst": ${hdgst:-false}, 00:40:34.961 "ddgst": ${ddgst:-false} 00:40:34.961 }, 00:40:34.961 "method": "bdev_nvme_attach_controller" 00:40:34.961 } 00:40:34.961 EOF 00:40:34.961 )") 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
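[annotation] The jq ., IFS=, and printf steps traced at this point are the tail end of nvmf/common.sh's gen_nvmf_target_json: each requested subsystem id contributes one heredoc JSON fragment to the config array, and the fragments are then comma-joined into a single bdev-subsystem config that jq pretty-prints (and thereby syntax-checks). A condensed sketch of that pattern, reconstructed from the xtrace output rather than copied from the source; $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT and the hdgst/ddgst defaults are exactly the names visible in the fragments above:

gen_nvmf_target_json() {
    local subsystem config=()

    # One bdev_nvme_attach_controller fragment per requested subsystem id
    # (defaults to a single subsystem, id 1).
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Comma-join the fragments and let jq pretty-print the final config;
    # a malformed fragment makes jq exit non-zero and fails the test early.
    jq . <<JSON
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [ $(IFS=','; printf '%s\n' "${config[*]}") ] }
  ]
}
JSON
}

The assembled document is what fio_bdev receives on /dev/fd/62; the printf '%s\n' '{ ... }' line traced next shows the joined result for the single Nvme0 subsystem of this test.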
00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:40:34.961 23:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:34.961 "params": { 00:40:34.961 "name": "Nvme0", 00:40:34.961 "trtype": "tcp", 00:40:34.961 "traddr": "10.0.0.2", 00:40:34.961 "adrfam": "ipv4", 00:40:34.961 "trsvcid": "4420", 00:40:34.961 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:34.961 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:34.961 "hdgst": false, 00:40:34.961 "ddgst": false 00:40:34.961 }, 00:40:34.961 "method": "bdev_nvme_attach_controller" 00:40:34.961 }' 00:40:35.223 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:35.223 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:35.224 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:35.224 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:35.224 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:35.224 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:35.224 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:35.224 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:35.224 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:35.224 23:09:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:35.484 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:35.484 ... 
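[annotation] Behind the fio banner above, target/dif.sh's gen_fio_conf (the @54/@56/@72 lines in the trace) generates the job file that fio reads from its second descriptor, /dev/fd/61. A rough sketch under the NULL_DIF=3 parameters set earlier (bs=128k, numjobs=3, iodepth=3, runtime=5); the exact [global] option list and the Nvme0n1 bdev name are assumptions, not a verbatim copy of dif.sh:

gen_fio_conf() {
    local file

    # rw/bs/numjobs/iodepth/runtime come from the NULL_DIF=3 case set at
    # target/dif.sh@103; ioengine=spdk_bdev is supplied on the fio command
    # line, so it does not need to appear in the job file.
    cat <<FIO
[global]
thread=1
rw=randread
bs=$bs
numjobs=$numjobs
iodepth=$iodepth
time_based=1
runtime=$runtime
FIO

    # One [filenameN] job per subsystem; NvmeXn1 is the namespace bdev that
    # bdev_nvme_attach_controller creates for controller NvmeX.
    for ((file = 1; file <= files; file++)); do
        cat <<FIO
[filename$((file - 1))]
filename=Nvme$((file - 1))n1
FIO
    done
}

With numjobs=3 and a single filename0 job, this yields the three threads fio reports starting below.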
00:40:35.484 fio-3.35 00:40:35.484 Starting 3 threads 00:40:42.185 00:40:42.185 filename0: (groupid=0, jobs=1): err= 0: pid=1018370: Mon Sep 30 23:09:08 2024 00:40:42.185 read: IOPS=321, BW=40.2MiB/s (42.1MB/s)(201MiB/5006msec) 00:40:42.185 slat (nsec): min=5471, max=31275, avg=8502.34, stdev=1656.30 00:40:42.185 clat (usec): min=4243, max=89402, avg=9325.63, stdev=7982.08 00:40:42.185 lat (usec): min=4252, max=89409, avg=9334.13, stdev=7982.13 00:40:42.185 clat percentiles (usec): 00:40:42.185 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6652], 00:40:42.185 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8291], 00:40:42.186 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[11338], 00:40:42.186 | 99.00th=[49021], 99.50th=[50070], 99.90th=[88605], 99.95th=[89654], 00:40:42.186 | 99.99th=[89654] 00:40:42.186 bw ( KiB/s): min=25344, max=49920, per=35.97%, avg=41113.60, stdev=7531.80, samples=10 00:40:42.186 iops : min= 198, max= 390, avg=321.20, stdev=58.84, samples=10 00:40:42.186 lat (msec) : 10=85.07%, 20=11.88%, 50=2.55%, 100=0.50% 00:40:42.186 cpu : usr=94.33%, sys=5.39%, ctx=11, majf=0, minf=78 00:40:42.186 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:42.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.186 issued rwts: total=1608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:42.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:42.186 filename0: (groupid=0, jobs=1): err= 0: pid=1018371: Mon Sep 30 23:09:08 2024 00:40:42.186 read: IOPS=412, BW=51.6MiB/s (54.1MB/s)(260MiB/5045msec) 00:40:42.186 slat (nsec): min=5434, max=33097, avg=7787.48, stdev=1758.69 00:40:42.186 clat (usec): min=3443, max=87206, avg=7235.94, stdev=6279.78 00:40:42.186 lat (usec): min=3449, max=87212, avg=7243.73, stdev=6279.92 00:40:42.186 clat percentiles (usec): 00:40:42.186 | 1.00th=[ 3916], 5.00th=[ 4621], 10.00th=[ 4948], 20.00th=[ 5342], 00:40:42.186 | 30.00th=[ 5604], 40.00th=[ 5932], 50.00th=[ 6259], 60.00th=[ 6587], 00:40:42.186 | 70.00th=[ 6980], 80.00th=[ 7439], 90.00th=[ 8094], 95.00th=[ 8586], 00:40:42.186 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:40:42.186 | 99.99th=[87557] 00:40:42.186 bw ( KiB/s): min=46592, max=60160, per=46.60%, avg=53263.00, stdev=4053.82, samples=10 00:40:42.186 iops : min= 364, max= 470, avg=416.10, stdev=31.67, samples=10 00:40:42.186 lat (msec) : 4=1.30%, 10=96.30%, 20=0.19%, 50=2.16%, 100=0.05% 00:40:42.186 cpu : usr=94.17%, sys=5.59%, ctx=6, majf=0, minf=120 00:40:42.186 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:42.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.186 issued rwts: total=2083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:42.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:42.186 filename0: (groupid=0, jobs=1): err= 0: pid=1018372: Mon Sep 30 23:09:08 2024 00:40:42.186 read: IOPS=161, BW=20.2MiB/s (21.2MB/s)(102MiB/5026msec) 00:40:42.186 slat (nsec): min=5661, max=30907, avg=7804.71, stdev=1641.43 00:40:42.186 clat (msec): min=4, max=131, avg=18.51, stdev=20.64 00:40:42.186 lat (msec): min=4, max=131, avg=18.52, stdev=20.64 00:40:42.186 clat percentiles (msec): 00:40:42.186 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:40:42.186 | 
30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:40:42.186 | 70.00th=[ 11], 80.00th=[ 48], 90.00th=[ 51], 95.00th=[ 52], 00:40:42.186 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 132], 99.95th=[ 132], 00:40:42.186 | 99.99th=[ 132] 00:40:42.186 bw ( KiB/s): min=11776, max=26624, per=18.17%, avg=20766.70, stdev=4716.14, samples=10 00:40:42.186 iops : min= 92, max= 208, avg=162.20, stdev=36.80, samples=10 00:40:42.186 lat (msec) : 10=69.04%, 20=9.46%, 50=12.04%, 100=9.34%, 250=0.12% 00:40:42.186 cpu : usr=95.58%, sys=4.16%, ctx=8, majf=0, minf=68 00:40:42.186 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:42.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.186 issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:42.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:42.186 00:40:42.186 Run status group 0 (all jobs): 00:40:42.186 READ: bw=112MiB/s (117MB/s), 20.2MiB/s-51.6MiB/s (21.2MB/s-54.1MB/s), io=563MiB (590MB), run=5006-5045msec 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:40:42.186 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 bdev_null0 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 [2024-09-30 23:09:08.235138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 bdev_null1 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
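[annotation] The create_subsystem calls being traced here repeat one four-step RPC sequence per subsystem id: create a DIF-enabled null bdev, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. The RPC names and arguments are verbatim from the trace; rpc_cmd in the harness forwards them to SPDK's RPC client, for which the scripts/rpc.py path is assumed below:

create_subsystem() {
    local sub_id=$1
    # 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata,
    # protection information type 2 (the fio_dif_1_multi_subsystems run
    # above used --dif-type 3 instead).
    ./scripts/rpc.py bdev_null_create "bdev_null$sub_id" 64 512 \
        --md-size 16 --dif-type 2
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns \
        "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
    ./scripts/rpc.py nvmf_subsystem_add_listener \
        "nqn.2016-06.io.spdk:cnode$sub_id" -t tcp -a 10.0.0.2 -s 4420
}

# This test builds three such subsystems, cnode0 through cnode2:
for sub in 0 1 2; do create_subsystem "$sub"; done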
00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 bdev_null2 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for 
subsystem in "${@:-1}" 00:40:42.187 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:42.188 { 00:40:42.188 "params": { 00:40:42.188 "name": "Nvme$subsystem", 00:40:42.188 "trtype": "$TEST_TRANSPORT", 00:40:42.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:42.188 "adrfam": "ipv4", 00:40:42.188 "trsvcid": "$NVMF_PORT", 00:40:42.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:42.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:42.188 "hdgst": ${hdgst:-false}, 00:40:42.188 "ddgst": ${ddgst:-false} 00:40:42.188 }, 00:40:42.188 "method": "bdev_nvme_attach_controller" 00:40:42.188 } 00:40:42.188 EOF 00:40:42.188 )") 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:42.188 { 00:40:42.188 "params": { 00:40:42.188 "name": "Nvme$subsystem", 00:40:42.188 "trtype": "$TEST_TRANSPORT", 00:40:42.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:42.188 "adrfam": "ipv4", 00:40:42.188 "trsvcid": "$NVMF_PORT", 00:40:42.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:42.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:42.188 "hdgst": ${hdgst:-false}, 00:40:42.188 "ddgst": ${ddgst:-false} 00:40:42.188 }, 00:40:42.188 "method": "bdev_nvme_attach_controller" 00:40:42.188 } 00:40:42.188 EOF 00:40:42.188 )") 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 
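[annotation] The autotest_common.sh@1337-@1352 lines interleaved here are the sanitizer-aware fio launcher: ldd the SPDK fio plugin, grep for each known ASAN runtime, and LD_PRELOAD anything found ahead of the plugin so the sanitizer library resolves first. A condensed sketch; accumulating hits into one LD_PRELOAD string is an assumption about the instrumented branch, which this uninstrumented run never takes:

fio_plugin() {
    local plugin=$1; shift
    local fio_dir=/usr/src/fio
    local sanitizers=('libasan' 'libclang_rt.asan')
    local sanitizer asan_lib asan_libs=

    for sanitizer in "${sanitizers[@]}"; do
        # ldd's third column is the resolved library path, present only if
        # the plugin actually links against this sanitizer runtime.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && asan_libs+="$asan_lib"
    done

    # Preload any sanitizer runtimes first, then the plugin itself.
    LD_PRELOAD="$asan_libs $plugin" "$fio_dir/fio" "$@"
}

In this run both greps come back empty ([[ -n '' ]] fails twice), so LD_PRELOAD ends up carrying only the plugin path, matching the LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' assignment traced just below.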
00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:42.188 { 00:40:42.188 "params": { 00:40:42.188 "name": "Nvme$subsystem", 00:40:42.188 "trtype": "$TEST_TRANSPORT", 00:40:42.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:42.188 "adrfam": "ipv4", 00:40:42.188 "trsvcid": "$NVMF_PORT", 00:40:42.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:42.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:42.188 "hdgst": ${hdgst:-false}, 00:40:42.188 "ddgst": ${ddgst:-false} 00:40:42.188 }, 00:40:42.188 "method": "bdev_nvme_attach_controller" 00:40:42.188 } 00:40:42.188 EOF 00:40:42.188 )") 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:40:42.188 23:09:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:42.188 "params": { 00:40:42.188 "name": "Nvme0", 00:40:42.188 "trtype": "tcp", 00:40:42.188 "traddr": "10.0.0.2", 00:40:42.188 "adrfam": "ipv4", 00:40:42.188 "trsvcid": "4420", 00:40:42.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:42.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:42.188 "hdgst": false, 00:40:42.188 "ddgst": false 00:40:42.188 }, 00:40:42.188 "method": "bdev_nvme_attach_controller" 00:40:42.188 },{ 00:40:42.188 "params": { 00:40:42.188 "name": "Nvme1", 00:40:42.188 "trtype": "tcp", 00:40:42.188 "traddr": "10.0.0.2", 00:40:42.188 "adrfam": "ipv4", 00:40:42.189 "trsvcid": "4420", 00:40:42.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:42.189 "hdgst": false, 00:40:42.189 "ddgst": false 00:40:42.189 }, 00:40:42.189 "method": "bdev_nvme_attach_controller" 00:40:42.189 },{ 00:40:42.189 "params": { 00:40:42.189 "name": "Nvme2", 00:40:42.189 "trtype": "tcp", 00:40:42.189 "traddr": "10.0.0.2", 00:40:42.189 "adrfam": "ipv4", 00:40:42.189 "trsvcid": "4420", 00:40:42.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:42.189 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:42.189 "hdgst": false, 00:40:42.189 "ddgst": false 00:40:42.189 }, 00:40:42.189 "method": "bdev_nvme_attach_controller" 00:40:42.189 }' 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:42.189 23:09:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:42.189 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:42.189 ... 00:40:42.189 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:42.189 ... 00:40:42.189 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:42.189 ... 00:40:42.189 fio-3.35 00:40:42.189 Starting 24 threads 00:40:54.422 00:40:54.422 filename0: (groupid=0, jobs=1): err= 0: pid=1020222: Mon Sep 30 23:09:19 2024 00:40:54.422 read: IOPS=682, BW=2729KiB/s (2794kB/s)(26.6MiB/10001msec) 00:40:54.422 slat (nsec): min=5463, max=97348, avg=10738.41, stdev=5953.26 00:40:54.422 clat (usec): min=10975, max=37437, avg=23369.92, stdev=2571.16 00:40:54.422 lat (usec): min=10983, max=37461, avg=23380.66, stdev=2571.88 00:40:54.422 clat percentiles (usec): 00:40:54.422 | 1.00th=[14091], 5.00th=[16909], 10.00th=[22414], 20.00th=[23462], 00:40:54.422 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:54.422 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:40:54.422 | 99.00th=[31851], 99.50th=[34341], 99.90th=[36439], 99.95th=[36963], 00:40:54.422 | 99.99th=[37487] 00:40:54.422 bw ( KiB/s): min= 2688, max= 3328, per=4.23%, avg=2730.95, stdev=150.10, samples=19 00:40:54.422 iops : min= 672, max= 832, avg=682.74, stdev=37.52, samples=19 00:40:54.422 lat (msec) : 20=8.31%, 50=91.69% 00:40:54.422 cpu : usr=98.74%, sys=0.90%, ctx=34, majf=0, minf=26 00:40:54.422 IO depths : 1=3.5%, 2=9.0%, 4=23.0%, 8=55.5%, 16=9.1%, 32=0.0%, >=64=0.0% 00:40:54.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.422 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.422 issued rwts: total=6822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.422 filename0: (groupid=0, jobs=1): err= 0: pid=1020223: Mon Sep 30 23:09:19 2024 00:40:54.422 read: IOPS=680, BW=2721KiB/s (2786kB/s)(26.6MiB/10005msec) 00:40:54.422 slat (nsec): min=5554, max=80034, avg=19319.64, stdev=14056.85 00:40:54.422 clat (usec): min=12060, max=36960, avg=23350.75, stdev=2886.50 00:40:54.422 lat (usec): min=12089, max=36966, avg=23370.06, stdev=2888.22 00:40:54.422 clat percentiles (usec): 00:40:54.422 | 1.00th=[14353], 5.00th=[16909], 10.00th=[20055], 20.00th=[23200], 00:40:54.422 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:40:54.422 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[27395], 00:40:54.422 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[36963], 00:40:54.422 | 99.99th=[36963] 00:40:54.422 bw ( KiB/s): min= 2560, max= 2976, per=4.22%, avg=2724.21, stdev=127.23, samples=19 00:40:54.422 iops : min= 640, max= 744, avg=681.05, stdev=31.81, samples=19 00:40:54.422 lat (msec) : 20=9.76%, 50=90.24% 00:40:54.422 cpu : usr=98.73%, sys=0.89%, ctx=78, majf=0, minf=21 00:40:54.422 IO depths : 1=4.2%, 2=8.7%, 4=19.5%, 8=58.8%, 16=8.8%, 32=0.0%, >=64=0.0% 00:40:54.422 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.422 complete : 0=0.0%, 4=92.6%, 8=2.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.422 issued rwts: total=6806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.422 filename0: (groupid=0, jobs=1): err= 0: pid=1020224: Mon Sep 30 23:09:19 2024 00:40:54.422 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10008msec) 00:40:54.422 slat (nsec): min=5600, max=81260, avg=21245.55, stdev=12632.78 00:40:54.422 clat (usec): min=7201, max=36612, avg=23625.18, stdev=1576.90 00:40:54.422 lat (usec): min=7208, max=36628, avg=23646.42, stdev=1577.36 00:40:54.422 clat percentiles (usec): 00:40:54.422 | 1.00th=[16450], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:40:54.422 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.422 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.422 | 99.00th=[25035], 99.50th=[31327], 99.90th=[36439], 99.95th=[36439], 00:40:54.422 | 99.99th=[36439] 00:40:54.422 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2674.53, stdev=59.21, samples=19 00:40:54.422 iops : min= 640, max= 704, avg=668.63, stdev=14.80, samples=19 00:40:54.422 lat (msec) : 10=0.48%, 20=0.57%, 50=98.96% 00:40:54.422 cpu : usr=98.84%, sys=0.87%, ctx=13, majf=0, minf=26 00:40:54.422 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:54.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.422 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.422 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.422 filename0: (groupid=0, jobs=1): err= 0: pid=1020225: Mon Sep 30 23:09:19 2024 00:40:54.422 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10009msec) 00:40:54.422 slat (nsec): min=5412, max=80133, avg=18728.22, stdev=12546.32 00:40:54.422 clat (usec): min=8635, max=40473, avg=23536.45, stdev=2877.63 00:40:54.422 lat (usec): min=8640, max=40488, avg=23555.17, stdev=2879.10 00:40:54.422 clat percentiles (usec): 00:40:54.422 | 1.00th=[14484], 5.00th=[17171], 10.00th=[22676], 20.00th=[23200], 00:40:54.422 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.422 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[27395], 00:40:54.422 | 99.00th=[32113], 99.50th=[35914], 99.90th=[40633], 99.95th=[40633], 00:40:54.423 | 99.99th=[40633] 00:40:54.423 bw ( KiB/s): min= 2560, max= 3216, per=4.17%, avg=2694.74, stdev=149.46, samples=19 00:40:54.423 iops : min= 640, max= 804, avg=673.68, stdev=37.36, samples=19 00:40:54.423 lat (msec) : 10=0.21%, 20=7.63%, 50=92.16% 00:40:54.423 cpu : usr=98.52%, sys=1.00%, ctx=147, majf=0, minf=38 00:40:54.423 IO depths : 1=4.1%, 2=9.0%, 4=21.5%, 8=56.8%, 16=8.7%, 32=0.0%, >=64=0.0% 00:40:54.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 issued rwts: total=6759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.423 filename0: (groupid=0, jobs=1): err= 0: pid=1020226: Mon Sep 30 23:09:19 2024 00:40:54.423 read: IOPS=678, BW=2713KiB/s (2778kB/s)(26.5MiB/10004msec) 00:40:54.423 slat (nsec): min=5563, max=82105, avg=14406.32, stdev=10864.85 00:40:54.423 clat (usec): min=1918, max=25418, 
avg=23478.13, stdev=2132.43 00:40:54.423 lat (usec): min=1935, max=25425, avg=23492.54, stdev=2132.23 00:40:54.423 clat percentiles (usec): 00:40:54.423 | 1.00th=[12256], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.423 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:54.423 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.423 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:40:54.423 | 99.99th=[25297] 00:40:54.423 bw ( KiB/s): min= 2688, max= 3200, per=4.20%, avg=2714.95, stdev=117.46, samples=19 00:40:54.423 iops : min= 672, max= 800, avg=678.74, stdev=29.37, samples=19 00:40:54.423 lat (msec) : 2=0.07%, 4=0.40%, 10=0.47%, 20=1.42%, 50=97.64% 00:40:54.423 cpu : usr=98.19%, sys=1.19%, ctx=255, majf=0, minf=54 00:40:54.423 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:54.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.423 filename0: (groupid=0, jobs=1): err= 0: pid=1020227: Mon Sep 30 23:09:19 2024 00:40:54.423 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10009msec) 00:40:54.423 slat (nsec): min=5550, max=68369, avg=9899.57, stdev=7182.58 00:40:54.423 clat (usec): min=13012, max=35124, avg=23747.62, stdev=870.54 00:40:54.423 lat (usec): min=13019, max=35131, avg=23757.52, stdev=869.88 00:40:54.423 clat percentiles (usec): 00:40:54.423 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:40:54.423 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:54.423 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:40:54.423 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25822], 99.95th=[33817], 00:40:54.423 | 99.99th=[34866] 00:40:54.423 bw ( KiB/s): min= 2672, max= 2704, per=4.16%, avg=2687.68, stdev= 5.51, samples=19 00:40:54.423 iops : min= 668, max= 676, avg=671.89, stdev= 1.41, samples=19 00:40:54.423 lat (msec) : 20=0.80%, 50=99.20% 00:40:54.423 cpu : usr=98.79%, sys=0.83%, ctx=85, majf=0, minf=26 00:40:54.423 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:54.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.423 filename0: (groupid=0, jobs=1): err= 0: pid=1020228: Mon Sep 30 23:09:19 2024 00:40:54.423 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10003msec) 00:40:54.423 slat (nsec): min=5418, max=63419, avg=17187.30, stdev=9296.83 00:40:54.423 clat (usec): min=4199, max=54018, avg=23669.14, stdev=2058.71 00:40:54.423 lat (usec): min=4205, max=54045, avg=23686.33, stdev=2059.02 00:40:54.423 clat percentiles (usec): 00:40:54.423 | 1.00th=[15795], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.423 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:54.423 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.423 | 99.00th=[26870], 99.50th=[31851], 99.90th=[45351], 99.95th=[45351], 00:40:54.423 | 99.99th=[54264] 00:40:54.423 bw ( KiB/s): min= 2432, max= 2688, per=4.14%, avg=2674.53, 
stdev=58.73, samples=19 00:40:54.423 iops : min= 608, max= 672, avg=668.63, stdev=14.68, samples=19 00:40:54.423 lat (msec) : 10=0.58%, 20=0.85%, 50=98.54%, 100=0.03% 00:40:54.423 cpu : usr=98.49%, sys=1.03%, ctx=153, majf=0, minf=46 00:40:54.423 IO depths : 1=4.4%, 2=10.6%, 4=24.9%, 8=52.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:40:54.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.423 filename0: (groupid=0, jobs=1): err= 0: pid=1020229: Mon Sep 30 23:09:19 2024 00:40:54.423 read: IOPS=677, BW=2710KiB/s (2775kB/s)(26.6MiB/10059msec) 00:40:54.423 slat (nsec): min=5590, max=90859, avg=17913.95, stdev=12250.27 00:40:54.423 clat (usec): min=829, max=58322, avg=23369.86, stdev=2969.19 00:40:54.423 lat (usec): min=848, max=58329, avg=23387.77, stdev=2969.29 00:40:54.423 clat percentiles (usec): 00:40:54.423 | 1.00th=[ 7767], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.423 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.423 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.423 | 99.00th=[25822], 99.50th=[33817], 99.90th=[39584], 99.95th=[58459], 00:40:54.423 | 99.99th=[58459] 00:40:54.423 bw ( KiB/s): min= 2560, max= 3328, per=4.22%, avg=2724.00, stdev=149.18, samples=20 00:40:54.423 iops : min= 640, max= 832, avg=681.00, stdev=37.30, samples=20 00:40:54.423 lat (usec) : 1000=0.03% 00:40:54.423 lat (msec) : 2=0.38%, 4=0.40%, 10=0.60%, 20=2.47%, 50=96.05% 00:40:54.423 lat (msec) : 100=0.07% 00:40:54.423 cpu : usr=98.67%, sys=0.87%, ctx=64, majf=0, minf=58 00:40:54.423 IO depths : 1=5.8%, 2=11.8%, 4=24.2%, 8=51.4%, 16=6.8%, 32=0.0%, >=64=0.0% 00:40:54.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 issued rwts: total=6815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.423 filename1: (groupid=0, jobs=1): err= 0: pid=1020230: Mon Sep 30 23:09:19 2024 00:40:54.423 read: IOPS=670, BW=2681KiB/s (2746kB/s)(26.2MiB/10001msec) 00:40:54.423 slat (nsec): min=5556, max=81971, avg=19766.18, stdev=14351.00 00:40:54.423 clat (usec): min=14415, max=33566, avg=23679.27, stdev=859.59 00:40:54.423 lat (usec): min=14424, max=33573, avg=23699.03, stdev=858.99 00:40:54.423 clat percentiles (usec): 00:40:54.423 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:40:54.423 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.423 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:40:54.423 | 99.00th=[25035], 99.50th=[25297], 99.90th=[32113], 99.95th=[33162], 00:40:54.423 | 99.99th=[33817] 00:40:54.423 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2681.26, stdev=52.07, samples=19 00:40:54.423 iops : min= 640, max= 704, avg=670.32, stdev=13.02, samples=19 00:40:54.423 lat (msec) : 20=0.51%, 50=99.49% 00:40:54.423 cpu : usr=98.77%, sys=0.89%, ctx=66, majf=0, minf=24 00:40:54.423 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:54.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.423 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:40:54.423 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.423 filename1: (groupid=0, jobs=1): err= 0: pid=1020231: Mon Sep 30 23:09:19 2024 00:40:54.423 read: IOPS=674, BW=2697KiB/s (2761kB/s)(26.4MiB/10015msec) 00:40:54.423 slat (nsec): min=5585, max=97381, avg=23415.34, stdev=15689.80 00:40:54.423 clat (usec): min=8042, max=25376, avg=23531.23, stdev=1386.17 00:40:54.423 lat (usec): min=8069, max=25383, avg=23554.65, stdev=1385.84 00:40:54.423 clat percentiles (usec): 00:40:54.423 | 1.00th=[15401], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.424 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.424 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.424 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:40:54.424 | 99.99th=[25297] 00:40:54.424 bw ( KiB/s): min= 2560, max= 2816, per=4.17%, avg=2694.74, stdev=79.52, samples=19 00:40:54.424 iops : min= 640, max= 704, avg=673.68, stdev=19.88, samples=19 00:40:54.424 lat (msec) : 10=0.24%, 20=1.42%, 50=98.34% 00:40:54.424 cpu : usr=98.83%, sys=0.86%, ctx=32, majf=0, minf=35 00:40:54.424 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:54.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.424 filename1: (groupid=0, jobs=1): err= 0: pid=1020232: Mon Sep 30 23:09:19 2024 00:40:54.424 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10009msec) 00:40:54.424 slat (nsec): min=5568, max=84668, avg=18915.44, stdev=13654.59 00:40:54.424 clat (usec): min=13782, max=33970, avg=23672.55, stdev=851.27 00:40:54.424 lat (usec): min=13795, max=33977, avg=23691.46, stdev=850.54 00:40:54.424 clat percentiles (usec): 00:40:54.424 | 1.00th=[22152], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.424 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.424 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.424 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25822], 99.95th=[33162], 00:40:54.424 | 99.99th=[33817] 00:40:54.424 bw ( KiB/s): min= 2688, max= 2688, per=4.16%, avg=2688.00, stdev= 0.00, samples=19 00:40:54.424 iops : min= 672, max= 672, avg=672.00, stdev= 0.00, samples=19 00:40:54.424 lat (msec) : 20=0.77%, 50=99.23% 00:40:54.424 cpu : usr=98.32%, sys=1.01%, ctx=141, majf=0, minf=36 00:40:54.424 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:54.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.424 filename1: (groupid=0, jobs=1): err= 0: pid=1020233: Mon Sep 30 23:09:19 2024 00:40:54.424 read: IOPS=688, BW=2753KiB/s (2819kB/s)(26.9MiB/10015msec) 00:40:54.424 slat (nsec): min=5550, max=88349, avg=13768.54, stdev=12776.03 00:40:54.424 clat (usec): min=8033, max=36356, avg=23143.71, stdev=2490.98 00:40:54.424 lat (usec): min=8055, max=36362, avg=23157.48, stdev=2491.69 00:40:54.424 clat percentiles (usec): 00:40:54.424 | 
1.00th=[13566], 5.00th=[16909], 10.00th=[22676], 20.00th=[23200], 00:40:54.424 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:54.424 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.424 | 99.00th=[25035], 99.50th=[32113], 99.90th=[36439], 99.95th=[36439], 00:40:54.424 | 99.99th=[36439] 00:40:54.424 bw ( KiB/s): min= 2560, max= 3248, per=4.26%, avg=2753.68, stdev=150.18, samples=19 00:40:54.424 iops : min= 640, max= 812, avg=688.42, stdev=37.54, samples=19 00:40:54.424 lat (msec) : 10=0.32%, 20=8.21%, 50=91.47% 00:40:54.424 cpu : usr=98.90%, sys=0.78%, ctx=51, majf=0, minf=32 00:40:54.424 IO depths : 1=5.6%, 2=11.3%, 4=23.4%, 8=52.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:40:54.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 issued rwts: total=6892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.424 filename1: (groupid=0, jobs=1): err= 0: pid=1020234: Mon Sep 30 23:09:19 2024 00:40:54.424 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10004msec) 00:40:54.424 slat (nsec): min=5684, max=96131, avg=20350.92, stdev=12747.98 00:40:54.424 clat (usec): min=4721, max=46249, avg=23619.95, stdev=1951.49 00:40:54.424 lat (usec): min=4729, max=46265, avg=23640.30, stdev=1951.62 00:40:54.424 clat percentiles (usec): 00:40:54.424 | 1.00th=[15664], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.424 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.424 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.424 | 99.00th=[25035], 99.50th=[32900], 99.90th=[46400], 99.95th=[46400], 00:40:54.424 | 99.99th=[46400] 00:40:54.424 bw ( KiB/s): min= 2436, max= 2688, per=4.14%, avg=2674.74, stdev=57.81, samples=19 00:40:54.424 iops : min= 609, max= 672, avg=668.68, stdev=14.45, samples=19 00:40:54.424 lat (msec) : 10=0.71%, 20=0.30%, 50=98.99% 00:40:54.424 cpu : usr=98.59%, sys=0.94%, ctx=74, majf=0, minf=44 00:40:54.424 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:54.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.424 filename1: (groupid=0, jobs=1): err= 0: pid=1020235: Mon Sep 30 23:09:19 2024 00:40:54.424 read: IOPS=675, BW=2704KiB/s (2769kB/s)(26.5MiB/10044msec) 00:40:54.424 slat (nsec): min=5424, max=88382, avg=17608.13, stdev=14557.14 00:40:54.424 clat (usec): min=5515, max=45278, avg=23490.48, stdev=3511.42 00:40:54.424 lat (usec): min=5521, max=45290, avg=23508.09, stdev=3512.34 00:40:54.424 clat percentiles (usec): 00:40:54.424 | 1.00th=[14615], 5.00th=[17695], 10.00th=[19530], 20.00th=[22938], 00:40:54.424 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.424 | 70.00th=[23987], 80.00th=[24249], 90.00th=[27132], 95.00th=[28705], 00:40:54.424 | 99.00th=[35914], 99.50th=[37487], 99.90th=[45351], 99.95th=[45351], 00:40:54.424 | 99.99th=[45351] 00:40:54.424 bw ( KiB/s): min= 2432, max= 2960, per=4.18%, avg=2701.89, stdev=120.32, samples=19 00:40:54.424 iops : min= 608, max= 740, avg=675.47, stdev=30.08, samples=19 00:40:54.424 lat (msec) : 10=0.24%, 20=13.05%, 50=86.71% 
00:40:54.424 cpu : usr=98.43%, sys=1.00%, ctx=202, majf=0, minf=28 00:40:54.424 IO depths : 1=1.9%, 2=4.0%, 4=10.5%, 8=70.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:40:54.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 complete : 0=0.0%, 4=90.6%, 8=6.0%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 issued rwts: total=6789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.424 filename1: (groupid=0, jobs=1): err= 0: pid=1020236: Mon Sep 30 23:09:19 2024 00:40:54.424 read: IOPS=670, BW=2682KiB/s (2746kB/s)(26.2MiB/10008msec) 00:40:54.424 slat (nsec): min=5562, max=79548, avg=19450.74, stdev=12090.10 00:40:54.424 clat (usec): min=10360, max=34678, avg=23690.93, stdev=1110.88 00:40:54.424 lat (usec): min=10366, max=34687, avg=23710.38, stdev=1111.19 00:40:54.424 clat percentiles (usec): 00:40:54.424 | 1.00th=[20055], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.424 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.424 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.424 | 99.00th=[25560], 99.50th=[28705], 99.90th=[34866], 99.95th=[34866], 00:40:54.424 | 99.99th=[34866] 00:40:54.424 bw ( KiB/s): min= 2560, max= 2752, per=4.14%, avg=2677.84, stdev=44.24, samples=19 00:40:54.424 iops : min= 640, max= 688, avg=669.42, stdev=11.05, samples=19 00:40:54.424 lat (msec) : 20=0.95%, 50=99.05% 00:40:54.424 cpu : usr=98.86%, sys=0.86%, ctx=14, majf=0, minf=34 00:40:54.424 IO depths : 1=5.5%, 2=11.7%, 4=24.8%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:40:54.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.424 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.424 filename1: (groupid=0, jobs=1): err= 0: pid=1020237: Mon Sep 30 23:09:19 2024 00:40:54.425 read: IOPS=701, BW=2805KiB/s (2873kB/s)(27.4MiB/10018msec) 00:40:54.425 slat (nsec): min=5553, max=76326, avg=13577.80, stdev=10869.36 00:40:54.425 clat (usec): min=11208, max=41381, avg=22723.22, stdev=4583.98 00:40:54.425 lat (usec): min=11216, max=41395, avg=22736.80, stdev=4586.41 00:40:54.425 clat percentiles (usec): 00:40:54.425 | 1.00th=[13960], 5.00th=[15401], 10.00th=[16188], 20.00th=[18744], 00:40:54.425 | 30.00th=[20579], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:40:54.425 | 70.00th=[23987], 80.00th=[24249], 90.00th=[28181], 95.00th=[31327], 00:40:54.425 | 99.00th=[37487], 99.50th=[38536], 99.90th=[40633], 99.95th=[41157], 00:40:54.425 | 99.99th=[41157] 00:40:54.425 bw ( KiB/s): min= 2608, max= 3136, per=4.35%, avg=2809.26, stdev=160.38, samples=19 00:40:54.425 iops : min= 652, max= 784, avg=702.32, stdev=40.10, samples=19 00:40:54.425 lat (msec) : 20=26.27%, 50=73.73% 00:40:54.425 cpu : usr=98.70%, sys=0.97%, ctx=68, majf=0, minf=34 00:40:54.425 IO depths : 1=1.9%, 2=4.0%, 4=11.4%, 8=70.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:40:54.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 issued rwts: total=7026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.425 filename2: (groupid=0, jobs=1): err= 0: pid=1020238: Mon Sep 30 23:09:19 2024 00:40:54.425 read: IOPS=669, 
BW=2679KiB/s (2744kB/s)(26.2MiB/10008msec) 00:40:54.425 slat (nsec): min=5569, max=80394, avg=18502.84, stdev=11877.04 00:40:54.425 clat (usec): min=8474, max=40215, avg=23722.84, stdev=1273.00 00:40:54.425 lat (usec): min=8481, max=40237, avg=23741.34, stdev=1272.76 00:40:54.425 clat percentiles (usec): 00:40:54.425 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.425 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.425 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.425 | 99.00th=[25035], 99.50th=[26084], 99.90th=[40109], 99.95th=[40109], 00:40:54.425 | 99.99th=[40109] 00:40:54.425 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2674.79, stdev=58.20, samples=19 00:40:54.425 iops : min= 640, max= 704, avg=668.68, stdev=14.58, samples=19 00:40:54.425 lat (msec) : 10=0.03%, 20=0.60%, 50=99.37% 00:40:54.425 cpu : usr=98.86%, sys=0.87%, ctx=14, majf=0, minf=32 00:40:54.425 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:54.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.425 filename2: (groupid=0, jobs=1): err= 0: pid=1020239: Mon Sep 30 23:09:19 2024 00:40:54.425 read: IOPS=673, BW=2693KiB/s (2758kB/s)(26.3MiB/10013msec) 00:40:54.425 slat (nsec): min=5564, max=75760, avg=16583.65, stdev=11165.32 00:40:54.425 clat (usec): min=11695, max=36359, avg=23621.60, stdev=1428.41 00:40:54.425 lat (usec): min=11716, max=36366, avg=23638.18, stdev=1428.19 00:40:54.425 clat percentiles (usec): 00:40:54.425 | 1.00th=[16057], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.425 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.425 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.425 | 99.00th=[25297], 99.50th=[28967], 99.90th=[36439], 99.95th=[36439], 00:40:54.425 | 99.99th=[36439] 00:40:54.425 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2690.53, stdev=61.34, samples=19 00:40:54.425 iops : min= 640, max= 704, avg=672.63, stdev=15.33, samples=19 00:40:54.425 lat (msec) : 20=2.17%, 50=97.83% 00:40:54.425 cpu : usr=98.92%, sys=0.82%, ctx=14, majf=0, minf=46 00:40:54.425 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:54.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 issued rwts: total=6742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.425 filename2: (groupid=0, jobs=1): err= 0: pid=1020240: Mon Sep 30 23:09:19 2024 00:40:54.425 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10009msec) 00:40:54.425 slat (nsec): min=5582, max=81264, avg=21367.12, stdev=12983.55 00:40:54.425 clat (usec): min=6955, max=36572, avg=23626.72, stdev=1390.98 00:40:54.425 lat (usec): min=6977, max=36590, avg=23648.09, stdev=1390.58 00:40:54.425 clat percentiles (usec): 00:40:54.425 | 1.00th=[22152], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:40:54.425 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.425 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.425 | 99.00th=[25035], 
99.50th=[25822], 99.90th=[36439], 99.95th=[36439], 00:40:54.425 | 99.99th=[36439] 00:40:54.425 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2674.53, stdev=58.97, samples=19 00:40:54.425 iops : min= 640, max= 704, avg=668.63, stdev=14.74, samples=19 00:40:54.425 lat (msec) : 10=0.37%, 20=0.49%, 50=99.14% 00:40:54.425 cpu : usr=98.93%, sys=0.79%, ctx=21, majf=0, minf=37 00:40:54.425 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:54.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.425 filename2: (groupid=0, jobs=1): err= 0: pid=1020241: Mon Sep 30 23:09:19 2024 00:40:54.425 read: IOPS=670, BW=2684KiB/s (2748kB/s)(26.2MiB/10001msec) 00:40:54.425 slat (nsec): min=5602, max=89094, avg=22171.95, stdev=14783.63 00:40:54.425 clat (usec): min=15167, max=36468, avg=23621.55, stdev=1183.11 00:40:54.425 lat (usec): min=15176, max=36476, avg=23643.72, stdev=1183.32 00:40:54.425 clat percentiles (usec): 00:40:54.425 | 1.00th=[18482], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:40:54.425 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:40:54.425 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.425 | 99.00th=[25560], 99.50th=[31589], 99.90th=[33424], 99.95th=[36439], 00:40:54.425 | 99.99th=[36439] 00:40:54.425 bw ( KiB/s): min= 2560, max= 2869, per=4.16%, avg=2684.05, stdev=60.20, samples=19 00:40:54.425 iops : min= 640, max= 717, avg=671.00, stdev=15.01, samples=19 00:40:54.425 lat (msec) : 20=1.40%, 50=98.60% 00:40:54.425 cpu : usr=98.11%, sys=1.20%, ctx=171, majf=0, minf=32 00:40:54.425 IO depths : 1=6.1%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:54.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.425 filename2: (groupid=0, jobs=1): err= 0: pid=1020242: Mon Sep 30 23:09:19 2024 00:40:54.425 read: IOPS=674, BW=2697KiB/s (2761kB/s)(26.4MiB/10015msec) 00:40:54.425 slat (nsec): min=5551, max=82248, avg=21027.16, stdev=15028.47 00:40:54.425 clat (usec): min=8281, max=36306, avg=23551.88, stdev=1599.23 00:40:54.425 lat (usec): min=8297, max=36313, avg=23572.91, stdev=1599.53 00:40:54.425 clat percentiles (usec): 00:40:54.425 | 1.00th=[15270], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:40:54.425 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.425 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.425 | 99.00th=[25035], 99.50th=[30278], 99.90th=[33817], 99.95th=[35914], 00:40:54.425 | 99.99th=[36439] 00:40:54.425 bw ( KiB/s): min= 2560, max= 2816, per=4.17%, avg=2694.74, stdev=52.07, samples=19 00:40:54.425 iops : min= 640, max= 704, avg=673.68, stdev=13.02, samples=19 00:40:54.425 lat (msec) : 10=0.24%, 20=1.78%, 50=97.99% 00:40:54.425 cpu : usr=98.84%, sys=0.76%, ctx=85, majf=0, minf=45 00:40:54.425 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:40:54.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 complete : 0=0.0%, 4=94.2%, 
8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.425 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.425 filename2: (groupid=0, jobs=1): err= 0: pid=1020243: Mon Sep 30 23:09:19 2024 00:40:54.425 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10003msec) 00:40:54.425 slat (nsec): min=5616, max=72266, avg=19310.44, stdev=11389.78 00:40:54.425 clat (usec): min=4725, max=44927, avg=23635.71, stdev=1892.00 00:40:54.425 lat (usec): min=4751, max=44954, avg=23655.02, stdev=1892.32 00:40:54.425 clat percentiles (usec): 00:40:54.425 | 1.00th=[19268], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:54.425 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.425 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:54.426 | 99.00th=[25035], 99.50th=[30540], 99.90th=[44827], 99.95th=[44827], 00:40:54.426 | 99.99th=[44827] 00:40:54.426 bw ( KiB/s): min= 2436, max= 2688, per=4.14%, avg=2674.74, stdev=57.81, samples=19 00:40:54.426 iops : min= 609, max= 672, avg=668.68, stdev=14.45, samples=19 00:40:54.426 lat (msec) : 10=0.65%, 20=0.36%, 50=98.99% 00:40:54.426 cpu : usr=98.58%, sys=0.97%, ctx=64, majf=0, minf=42 00:40:54.426 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:54.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.426 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.426 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.426 filename2: (groupid=0, jobs=1): err= 0: pid=1020244: Mon Sep 30 23:09:19 2024 00:40:54.426 read: IOPS=672, BW=2690KiB/s (2755kB/s)(26.3MiB/10004msec) 00:40:54.426 slat (nsec): min=5552, max=82203, avg=12798.36, stdev=10949.57 00:40:54.426 clat (usec): min=7525, max=56302, avg=23735.13, stdev=2149.95 00:40:54.426 lat (usec): min=7531, max=56326, avg=23747.93, stdev=2150.09 00:40:54.426 clat percentiles (usec): 00:40:54.426 | 1.00th=[16450], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:40:54.426 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:54.426 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:40:54.426 | 99.00th=[28967], 99.50th=[33162], 99.90th=[47449], 99.95th=[47449], 00:40:54.426 | 99.99th=[56361] 00:40:54.426 bw ( KiB/s): min= 2480, max= 2768, per=4.16%, avg=2686.32, stdev=59.36, samples=19 00:40:54.426 iops : min= 620, max= 692, avg=671.58, stdev=14.84, samples=19 00:40:54.426 lat (msec) : 10=0.16%, 20=3.23%, 50=96.57%, 100=0.04% 00:40:54.426 cpu : usr=98.97%, sys=0.75%, ctx=15, majf=0, minf=45 00:40:54.426 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=80.3%, 16=18.0%, 32=0.0%, >=64=0.0% 00:40:54.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.426 complete : 0=0.0%, 4=89.6%, 8=9.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.426 issued rwts: total=6728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.426 filename2: (groupid=0, jobs=1): err= 0: pid=1020245: Mon Sep 30 23:09:19 2024 00:40:54.426 read: IOPS=689, BW=2760KiB/s (2826kB/s)(27.0MiB/10013msec) 00:40:54.426 slat (nsec): min=5510, max=64356, avg=11120.70, stdev=8936.57 00:40:54.426 clat (usec): min=10553, max=40607, avg=23118.42, stdev=3934.92 00:40:54.426 lat (usec): min=10560, max=40619, avg=23129.54, 
stdev=3935.89 00:40:54.426 clat percentiles (usec): 00:40:54.426 | 1.00th=[13304], 5.00th=[15926], 10.00th=[18220], 20.00th=[20055], 00:40:54.426 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:54.426 | 70.00th=[23987], 80.00th=[24249], 90.00th=[27657], 95.00th=[29754], 00:40:54.426 | 99.00th=[35390], 99.50th=[36439], 99.90th=[40109], 99.95th=[40633], 00:40:54.426 | 99.99th=[40633] 00:40:54.426 bw ( KiB/s): min= 2624, max= 3008, per=4.27%, avg=2757.05, stdev=96.01, samples=19 00:40:54.426 iops : min= 656, max= 752, avg=689.26, stdev=24.00, samples=19 00:40:54.426 lat (msec) : 20=20.47%, 50=79.53% 00:40:54.426 cpu : usr=98.83%, sys=0.78%, ctx=70, majf=0, minf=32 00:40:54.426 IO depths : 1=0.9%, 2=1.9%, 4=6.9%, 8=76.5%, 16=13.8%, 32=0.0%, >=64=0.0% 00:40:54.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.426 complete : 0=0.0%, 4=89.7%, 8=6.9%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.426 issued rwts: total=6908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:54.426 00:40:54.426 Run status group 0 (all jobs): 00:40:54.426 READ: bw=63.1MiB/s (66.1MB/s), 2679KiB/s-2805KiB/s (2744kB/s-2873kB/s), io=635MiB (665MB), run=10001-10059msec 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.426 23:09:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.426 bdev_null0 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:54.426 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.427 [2024-09-30 23:09:20.126057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.427 bdev_null1 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:40:54.427 23:09:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:54.427 { 00:40:54.427 "params": { 00:40:54.427 "name": "Nvme$subsystem", 00:40:54.427 "trtype": "$TEST_TRANSPORT", 00:40:54.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:54.427 "adrfam": "ipv4", 00:40:54.427 "trsvcid": "$NVMF_PORT", 00:40:54.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:54.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:54.427 "hdgst": ${hdgst:-false}, 00:40:54.427 "ddgst": ${ddgst:-false} 00:40:54.427 }, 00:40:54.427 "method": "bdev_nvme_attach_controller" 00:40:54.427 } 00:40:54.427 EOF 00:40:54.427 )") 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:54.427 { 00:40:54.427 "params": { 00:40:54.427 "name": "Nvme$subsystem", 00:40:54.427 "trtype": "$TEST_TRANSPORT", 00:40:54.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:54.427 "adrfam": "ipv4", 00:40:54.427 "trsvcid": "$NVMF_PORT", 00:40:54.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:54.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:54.427 "hdgst": ${hdgst:-false}, 00:40:54.427 "ddgst": ${ddgst:-false} 00:40:54.427 }, 00:40:54.427 "method": "bdev_nvme_attach_controller" 00:40:54.427 } 00:40:54.427 EOF 
00:40:54.427 )") 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:54.427 "params": { 00:40:54.427 "name": "Nvme0", 00:40:54.427 "trtype": "tcp", 00:40:54.427 "traddr": "10.0.0.2", 00:40:54.427 "adrfam": "ipv4", 00:40:54.427 "trsvcid": "4420", 00:40:54.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:54.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:54.427 "hdgst": false, 00:40:54.427 "ddgst": false 00:40:54.427 }, 00:40:54.427 "method": "bdev_nvme_attach_controller" 00:40:54.427 },{ 00:40:54.427 "params": { 00:40:54.427 "name": "Nvme1", 00:40:54.427 "trtype": "tcp", 00:40:54.427 "traddr": "10.0.0.2", 00:40:54.427 "adrfam": "ipv4", 00:40:54.427 "trsvcid": "4420", 00:40:54.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:54.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:54.427 "hdgst": false, 00:40:54.427 "ddgst": false 00:40:54.427 }, 00:40:54.427 "method": "bdev_nvme_attach_controller" 00:40:54.427 }' 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:54.427 23:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.427 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:54.427 ... 00:40:54.427 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:54.427 ... 
00:40:54.427 fio-3.35 00:40:54.427 Starting 4 threads 00:40:59.725 00:40:59.725 filename0: (groupid=0, jobs=1): err= 0: pid=1022652: Mon Sep 30 23:09:26 2024 00:40:59.725 read: IOPS=3091, BW=24.2MiB/s (25.3MB/s)(121MiB/5002msec) 00:40:59.725 slat (nsec): min=5411, max=56721, avg=8372.65, stdev=2176.54 00:40:59.725 clat (usec): min=712, max=4392, avg=2566.26, stdev=316.30 00:40:59.725 lat (usec): min=728, max=4400, avg=2574.63, stdev=316.00 00:40:59.725 clat percentiles (usec): 00:40:59.726 | 1.00th=[ 1614], 5.00th=[ 2008], 10.00th=[ 2180], 20.00th=[ 2376], 00:40:59.726 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:40:59.726 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2933], 00:40:59.726 | 99.00th=[ 3523], 99.50th=[ 3621], 99.90th=[ 3916], 99.95th=[ 4228], 00:40:59.726 | 99.99th=[ 4359] 00:40:59.726 bw ( KiB/s): min=24128, max=25676, per=25.97%, avg=24659.11, stdev=498.44, samples=9 00:40:59.726 iops : min= 3016, max= 3209, avg=3082.33, stdev=62.18, samples=9 00:40:59.726 lat (usec) : 750=0.02%, 1000=0.21% 00:40:59.726 lat (msec) : 2=4.24%, 4=95.45%, 10=0.08% 00:40:59.726 cpu : usr=96.22%, sys=3.38%, ctx=124, majf=0, minf=9 00:40:59.726 IO depths : 1=0.1%, 2=0.7%, 4=69.0%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.726 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.726 issued rwts: total=15464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.726 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:59.726 filename0: (groupid=0, jobs=1): err= 0: pid=1022653: Mon Sep 30 23:09:26 2024 00:40:59.726 read: IOPS=2918, BW=22.8MiB/s (23.9MB/s)(114MiB/5002msec) 00:40:59.726 slat (nsec): min=5388, max=71405, avg=8303.36, stdev=2894.89 00:40:59.726 clat (usec): min=1573, max=9465, avg=2720.07, stdev=288.83 00:40:59.726 lat (usec): min=1582, max=9471, avg=2728.37, stdev=288.77 00:40:59.726 clat percentiles (usec): 00:40:59.726 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:40:59.726 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:40:59.726 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 2933], 95.00th=[ 3130], 00:40:59.726 | 99.00th=[ 4015], 99.50th=[ 4178], 99.90th=[ 4555], 99.95th=[ 6390], 00:40:59.726 | 99.99th=[ 9503] 00:40:59.726 bw ( KiB/s): min=23056, max=23680, per=24.67%, avg=23423.78, stdev=215.26, samples=9 00:40:59.726 iops : min= 2882, max= 2960, avg=2927.89, stdev=27.02, samples=9 00:40:59.726 lat (msec) : 2=0.36%, 4=98.61%, 10=1.03% 00:40:59.726 cpu : usr=96.22%, sys=3.52%, ctx=5, majf=0, minf=9 00:40:59.726 IO depths : 1=0.1%, 2=0.2%, 4=71.0%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.726 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.726 issued rwts: total=14596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.726 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:59.726 filename1: (groupid=0, jobs=1): err= 0: pid=1022654: Mon Sep 30 23:09:26 2024 00:40:59.726 read: IOPS=2936, BW=22.9MiB/s (24.1MB/s)(115MiB/5003msec) 00:40:59.726 slat (nsec): min=7859, max=97097, avg=8712.70, stdev=2664.21 00:40:59.726 clat (usec): min=1150, max=6934, avg=2700.15, stdev=245.14 00:40:59.726 lat (usec): min=1159, max=6944, avg=2708.86, stdev=245.28 00:40:59.726 clat percentiles (usec): 00:40:59.726 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2507], 
20.00th=[ 2638], 00:40:59.726 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:40:59.726 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2933], 95.00th=[ 3032], 00:40:59.726 | 99.00th=[ 3589], 99.50th=[ 3884], 99.90th=[ 5211], 99.95th=[ 6194], 00:40:59.726 | 99.99th=[ 6915] 00:40:59.726 bw ( KiB/s): min=23150, max=23808, per=24.80%, avg=23548.22, stdev=216.13, samples=9 00:40:59.726 iops : min= 2893, max= 2976, avg=2943.44, stdev=27.19, samples=9 00:40:59.726 lat (msec) : 2=0.39%, 4=99.20%, 10=0.41% 00:40:59.726 cpu : usr=96.28%, sys=3.44%, ctx=8, majf=0, minf=9 00:40:59.726 IO depths : 1=0.1%, 2=0.1%, 4=72.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.726 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.726 issued rwts: total=14692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.726 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:59.726 filename1: (groupid=0, jobs=1): err= 0: pid=1022656: Mon Sep 30 23:09:26 2024 00:40:59.726 read: IOPS=2924, BW=22.8MiB/s (24.0MB/s)(114MiB/5003msec) 00:40:59.726 slat (nsec): min=7862, max=58803, avg=8792.92, stdev=2821.58 00:40:59.726 clat (usec): min=1378, max=6711, avg=2711.14, stdev=247.67 00:40:59.726 lat (usec): min=1386, max=6720, avg=2719.94, stdev=247.75 00:40:59.726 clat percentiles (usec): 00:40:59.726 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2638], 00:40:59.726 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:40:59.726 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 3097], 00:40:59.726 | 99.00th=[ 3621], 99.50th=[ 3949], 99.90th=[ 5014], 99.95th=[ 6128], 00:40:59.726 | 99.99th=[ 6718] 00:40:59.726 bw ( KiB/s): min=23152, max=23632, per=24.70%, avg=23459.56, stdev=162.34, samples=9 00:40:59.726 iops : min= 2894, max= 2954, avg=2932.44, stdev=20.29, samples=9 00:40:59.726 lat (msec) : 2=0.29%, 4=99.26%, 10=0.44% 00:40:59.726 cpu : usr=95.96%, sys=3.78%, ctx=6, majf=0, minf=9 00:40:59.726 IO depths : 1=0.1%, 2=0.1%, 4=72.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.726 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.726 issued rwts: total=14632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.726 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:59.726 00:40:59.726 Run status group 0 (all jobs): 00:40:59.726 READ: bw=92.7MiB/s (97.2MB/s), 22.8MiB/s-24.2MiB/s (23.9MB/s-25.3MB/s), io=464MiB (486MB), run=5002-5003msec 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.726 00:40:59.726 real 0m24.558s 00:40:59.726 user 5m10.867s 00:40:59.726 sys 0m4.676s 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:59.726 23:09:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:59.726 ************************************ 00:40:59.726 END TEST fio_dif_rand_params 00:40:59.726 ************************************ 00:40:59.726 23:09:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:59.726 23:09:26 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:59.726 23:09:26 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:59.726 23:09:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:59.726 ************************************ 00:40:59.726 START TEST fio_dif_digest 00:40:59.726 ************************************ 00:40:59.726 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:59.727 23:09:26 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:59.727 bdev_null0 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:59.727 [2024-09-30 23:09:26.592913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:59.727 { 00:40:59.727 "params": { 00:40:59.727 "name": "Nvme$subsystem", 00:40:59.727 "trtype": "$TEST_TRANSPORT", 00:40:59.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:59.727 "adrfam": "ipv4", 00:40:59.727 "trsvcid": "$NVMF_PORT", 00:40:59.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:40:59.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:59.727 "hdgst": ${hdgst:-false}, 00:40:59.727 "ddgst": ${ddgst:-false} 00:40:59.727 }, 00:40:59.727 "method": "bdev_nvme_attach_controller" 00:40:59.727 } 00:40:59.727 EOF 00:40:59.727 )") 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:59.727 "params": { 00:40:59.727 "name": "Nvme0", 00:40:59.727 "trtype": "tcp", 00:40:59.727 "traddr": "10.0.0.2", 00:40:59.727 "adrfam": "ipv4", 00:40:59.727 "trsvcid": "4420", 00:40:59.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:59.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:59.727 "hdgst": true, 00:40:59.727 "ddgst": true 00:40:59.727 }, 00:40:59.727 "method": "bdev_nvme_attach_controller" 00:40:59.727 }' 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:59.727 23:09:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:00.296 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:00.296 ... 
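What distinguishes this digest run from the fio_dif_rand_params runs above is visible in the generated params: "hdgst" and "ddgst" are now true, enabling NVMe/TCP header and data digests on the connection. The toggle is plain bash default substitution inside the heredoc; a small self-contained sketch of the mechanism:

hdgst=true ddgst=true
cat <<EOF
"hdgst": ${hdgst:-false},
"ddgst": ${ddgst:-false}
EOF
# with hdgst/ddgst unset, both expand to false -- the setting the
# earlier rand_params runs sent
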
00:41:00.296 fio-3.35 00:41:00.296 Starting 3 threads 00:41:12.528 00:41:12.528 filename0: (groupid=0, jobs=1): err= 0: pid=1024044: Mon Sep 30 23:09:37 2024 00:41:12.528 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(403MiB/10044msec) 00:41:12.528 slat (nsec): min=5771, max=34723, avg=6888.41, stdev=1214.98 00:41:12.528 clat (usec): min=6150, max=91955, avg=9330.17, stdev=3103.38 00:41:12.528 lat (usec): min=6156, max=91965, avg=9337.05, stdev=3103.43 00:41:12.528 clat percentiles (usec): 00:41:12.528 | 1.00th=[ 6783], 5.00th=[ 7242], 10.00th=[ 7504], 20.00th=[ 7898], 00:41:12.528 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:41:12.528 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11338], 00:41:12.528 | 99.00th=[12125], 99.50th=[12649], 99.90th=[51119], 99.95th=[90702], 00:41:12.528 | 99.99th=[91751] 00:41:12.528 bw ( KiB/s): min=37632, max=44032, per=36.74%, avg=41189.05, stdev=1717.08, samples=19 00:41:12.528 iops : min= 294, max= 344, avg=321.79, stdev=13.41, samples=19 00:41:12.528 lat (msec) : 10=67.66%, 20=32.06%, 50=0.06%, 100=0.22% 00:41:12.528 cpu : usr=93.92%, sys=5.68%, ctx=110, majf=0, minf=200 00:41:12.528 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.528 issued rwts: total=3222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:12.528 filename0: (groupid=0, jobs=1): err= 0: pid=1024045: Mon Sep 30 23:09:37 2024 00:41:12.528 read: IOPS=164, BW=20.5MiB/s (21.5MB/s)(206MiB/10007msec) 00:41:12.528 slat (nsec): min=5665, max=32160, avg=6578.05, stdev=1307.52 00:41:12.528 clat (usec): min=6057, max=93902, avg=18237.99, stdev=17864.30 00:41:12.528 lat (usec): min=6063, max=93908, avg=18244.57, stdev=17864.29 00:41:12.528 clat percentiles (usec): 00:41:12.528 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9765], 00:41:12.528 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10683], 60.00th=[10945], 00:41:12.528 | 70.00th=[11469], 80.00th=[12387], 90.00th=[51119], 95.00th=[51643], 00:41:12.528 | 99.00th=[91751], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:41:12.528 | 99.99th=[93848] 00:41:12.528 bw ( KiB/s): min=13056, max=29184, per=18.91%, avg=21205.26, stdev=4508.08, samples=19 00:41:12.528 iops : min= 102, max= 228, avg=165.63, stdev=35.22, samples=19 00:41:12.528 lat (msec) : 10=29.06%, 20=53.43%, 50=2.01%, 100=15.50% 00:41:12.528 cpu : usr=95.80%, sys=3.98%, ctx=22, majf=0, minf=66 00:41:12.528 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.528 issued rwts: total=1645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:12.528 filename0: (groupid=0, jobs=1): err= 0: pid=1024046: Mon Sep 30 23:09:37 2024 00:41:12.528 read: IOPS=391, BW=48.9MiB/s (51.3MB/s)(491MiB/10044msec) 00:41:12.528 slat (nsec): min=5838, max=32631, avg=8436.17, stdev=1400.77 00:41:12.528 clat (usec): min=4458, max=48673, avg=7644.89, stdev=1513.35 00:41:12.528 lat (usec): min=4464, max=48680, avg=7653.32, stdev=1513.51 00:41:12.528 clat percentiles (usec): 00:41:12.528 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6128], 
20.00th=[ 6456], 00:41:12.528 | 30.00th=[ 6718], 40.00th=[ 7046], 50.00th=[ 7504], 60.00th=[ 8094], 00:41:12.528 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9503], 00:41:12.528 | 99.00th=[10028], 99.50th=[10421], 99.90th=[11863], 99.95th=[46924], 00:41:12.528 | 99.99th=[48497] 00:41:12.528 bw ( KiB/s): min=45568, max=56320, per=44.85%, avg=50291.20, stdev=2360.17, samples=20 00:41:12.528 iops : min= 356, max= 440, avg=392.90, stdev=18.44, samples=20 00:41:12.528 lat (msec) : 10=98.73%, 20=1.22%, 50=0.05% 00:41:12.528 cpu : usr=93.85%, sys=5.91%, ctx=18, majf=0, minf=109 00:41:12.528 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.528 issued rwts: total=3931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:12.528 00:41:12.528 Run status group 0 (all jobs): 00:41:12.528 READ: bw=109MiB/s (115MB/s), 20.5MiB/s-48.9MiB/s (21.5MB/s-51.3MB/s), io=1100MiB (1153MB), run=10007-10044msec 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:12.528 00:41:12.528 real 0m11.068s 00:41:12.528 user 0m42.108s 00:41:12.528 sys 0m1.854s 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:12.528 23:09:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:12.528 ************************************ 00:41:12.528 END TEST fio_dif_digest 00:41:12.528 ************************************ 00:41:12.528 23:09:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:12.528 23:09:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:12.528 rmmod nvme_tcp 00:41:12.528 rmmod nvme_fabrics 00:41:12.528 rmmod nvme_keyring 00:41:12.528 23:09:37 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 1013043 ']' 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 1013043 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1013043 ']' 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1013043 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1013043 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1013043' 00:41:12.528 killing process with pid 1013043 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1013043 00:41:12.528 23:09:37 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1013043 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:41:12.528 23:09:37 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:14.442 Waiting for block devices as requested 00:41:14.703 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:14.703 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:14.703 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:14.703 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:14.964 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:14.964 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:14.964 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:15.224 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:15.224 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:15.484 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:15.484 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:15.484 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:15.744 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:15.744 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:15.744 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:16.005 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:16.005 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:16.265 23:09:43 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:16.265 23:09:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:16.265 23:09:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.806 23:09:45 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
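The nvmftestfini teardown traced here does four things: unloads the kernel NVMe/TCP initiator modules (with retries, since connections may still be draining), restores every firewall rule except the ones the harness tagged, removes the spdk network namespace, and flushes the test interface address. A condensed sketch of those steps (commands as traced; the retry-loop body is approximated, and the whole sequence assumes root):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
done
modprobe -v -r nvme-fabrics
set -e
# keep every iptables rule except those tagged SPDK_NVMF by the harness
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
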
00:41:18.806 00:41:18.806 real 1m19.073s 00:41:18.806 user 7m47.089s 00:41:18.806 sys 0m22.685s 00:41:18.806 23:09:45 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:18.806 23:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:18.806 ************************************ 00:41:18.806 END TEST nvmf_dif 00:41:18.806 ************************************ 00:41:18.806 23:09:45 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:18.806 23:09:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:18.806 23:09:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:18.806 23:09:45 -- common/autotest_common.sh@10 -- # set +x 00:41:18.806 ************************************ 00:41:18.806 START TEST nvmf_abort_qd_sizes 00:41:18.806 ************************************ 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:18.806 * Looking for test storage... 00:41:18.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:18.806 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:18.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.807 --rc genhtml_branch_coverage=1 00:41:18.807 --rc genhtml_function_coverage=1 00:41:18.807 --rc genhtml_legend=1 00:41:18.807 --rc geninfo_all_blocks=1 00:41:18.807 --rc geninfo_unexecuted_blocks=1 00:41:18.807 00:41:18.807 ' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:18.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.807 --rc genhtml_branch_coverage=1 00:41:18.807 --rc genhtml_function_coverage=1 00:41:18.807 --rc genhtml_legend=1 00:41:18.807 --rc geninfo_all_blocks=1 00:41:18.807 --rc geninfo_unexecuted_blocks=1 00:41:18.807 00:41:18.807 ' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:18.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.807 --rc genhtml_branch_coverage=1 00:41:18.807 --rc genhtml_function_coverage=1 00:41:18.807 --rc genhtml_legend=1 00:41:18.807 --rc geninfo_all_blocks=1 00:41:18.807 --rc geninfo_unexecuted_blocks=1 00:41:18.807 00:41:18.807 ' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:18.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.807 --rc genhtml_branch_coverage=1 00:41:18.807 --rc genhtml_function_coverage=1 00:41:18.807 --rc genhtml_legend=1 00:41:18.807 --rc geninfo_all_blocks=1 00:41:18.807 --rc geninfo_unexecuted_blocks=1 00:41:18.807 00:41:18.807 ' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
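The lt 1.15 2 walk above is scripts/common.sh comparing dotted versions field by field to pick lcov options. A compact sketch of the same comparison, assuming purely numeric components:

  # Sketch: succeed when dotted version $1 is strictly older than $2.
  version_lt() {
      local -a a b
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          # A missing field compares as 0, so 1.15 vs 2 behaves like 1.15 vs 2.0.
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2, keep the legacy coverage flags"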
NVMF_IP_PREFIX=192.168.100 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:18.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:18.807 23:09:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- 
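The captured failure above ('line 33: [: : integer expression expected') is the classic empty-variable -eq trap: the test expands to [ '' -eq 1 ], which is not an integer comparison. A hedged sketch of the usual guards; flag_var is a hypothetical stand-in for whichever variable line 33 actually tests:

  flag_var=""                          # hypothetical stand-in for the empty variable
  # Guard 1: default the empty string to 0 before the numeric test.
  if [ "${flag_var:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi
  # Guard 2: require non-empty first, short-circuiting the -eq.
  if [[ -n $flag_var && $flag_var -eq 1 ]]; then
      echo "flag enabled"
  fi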
nvmf/common.sh@320 -- # local -ga e810 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:26.947 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:26.947 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- 
# [[ tcp == rdma ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:26.947 Found net devices under 0000:31:00.0: cvl_0_0 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:26.947 Found net devices under 0000:31:00.1: cvl_0_1 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- 
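The scan above resolves each e810 PCI function to its kernel net devices via sysfs, which is how cvl_0_0 and cvl_0_1 were found. A minimal sketch of that lookup:

  # Sketch: list the net devices owned by a PCI network function.
  for pci in 0000:31:00.0 0000:31:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] || continue               # glob may match nothing
          name=${dev##*/}
          state=$(cat "/sys/class/net/$name/operstate")
          echo "Found net device under $pci: $name ($state)"
      done
  done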
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:26.947 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:26.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:26.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:41:26.948 00:41:26.948 --- 10.0.0.2 ping statistics --- 00:41:26.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.948 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:26.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:26.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:41:26.948 00:41:26.948 --- 10.0.0.1 ping statistics --- 00:41:26.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.948 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:41:26.948 23:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:29.494 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:29.494 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:29.494 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:29.494 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:29.494 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:29.494 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:29.755 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:30.015 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:30.015 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:30.015 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:30.015 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:30.015 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:30.275 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:30.275 23:09:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:30.275 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:30.275 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:30.275 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:30.275 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=1033596 00:41:30.276 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 1033596 00:41:30.276 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:30.276 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1033596 ']' 00:41:30.276 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:30.276 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:30.276 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
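The nvmf_tcp_init sequence above builds the split topology the rest of the run depends on: the target NIC moves into cvl_0_0_ns_spdk with 10.0.0.2, the initiator NIC stays in the root namespace with 10.0.0.1, and both directions are ping-verified. Condensed into one sketch:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                     # target side into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Admit NVMe/TCP traffic; the comment tag lets teardown strip exactly this rule.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  # Smoke-test both directions before the target starts.
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1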
00:41:30.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:30.276 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:30.276 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:30.276 [2024-09-30 23:09:57.149339] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:41:30.276 [2024-09-30 23:09:57.149399] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:30.276 [2024-09-30 23:09:57.233544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:30.536 [2024-09-30 23:09:57.301256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:30.536 [2024-09-30 23:09:57.301297] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:30.536 [2024-09-30 23:09:57.301305] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:30.536 [2024-09-30 23:09:57.301312] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:30.536 [2024-09-30 23:09:57.301318] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:30.536 [2024-09-30 23:09:57.301775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:30.537 [2024-09-30 23:09:57.301937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:30.537 [2024-09-30 23:09:57.302013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:30.537 [2024-09-30 23:09:57.302200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:31.109 
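waitforlisten above blocks until nvmf_tgt answers on /var/tmp/spdk.sock, with max_retries=100 per the trace. A rough sketch of such a poll loop (the rpc_get_methods probe is an assumption about readiness; the real helper lives in autotest_common.sh):

  wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1      # target died before listening
          # Socket present and answering a trivial RPC means it is ready.
          if [ -S "$sock" ] && "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.1
      done
      return 1
  }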
23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:31.109 23:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:31.109 ************************************ 00:41:31.109 START TEST spdk_target_abort 00:41:31.109 ************************************ 00:41:31.109 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:41:31.109 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:31.109 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:41:31.109 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.109 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:31.370 spdk_targetn1 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:31.370 [2024-09-30 23:09:58.344492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.370 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:31.370 [2024-09-30 23:09:58.384816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:31.630 23:09:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:31.630 [2024-09-30 23:09:58.633524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
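rabort above assembles a transport ID string one field at a time and hands it to the bundled abort example at queue depths 4, 24 and 64. A one-shot sketch of the same invocation against this test's listener (assuming $SPDK_DIR points at the build tree):

  traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:testnqn
  tgt="trtype:tcp adrfam:IPv4 traddr:$traddr trsvcid:$trsvcid subnqn:$subnqn"
  for qd in 4 24 64; do
      # -q queue depth, -w workload, -M read percentage, -o I/O size (bytes), -r transport ID
      "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$tgt"
  done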
cid:188 nsid:1 lba:24 len:8 PRP1 0x2000078be000 PRP2 0x0 00:41:31.630 [2024-09-30 23:09:58.633575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:41:31.890 [2024-09-30 23:09:58.649577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:664 len:8 PRP1 0x2000078ca000 PRP2 0x0 00:41:31.890 [2024-09-30 23:09:58.649613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0054 p:1 m:0 dnr:0 00:41:31.890 [2024-09-30 23:09:58.673484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1648 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:41:31.890 [2024-09-30 23:09:58.673518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00d1 p:1 m:0 dnr:0 00:41:31.890 [2024-09-30 23:09:58.705377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2904 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:41:31.890 [2024-09-30 23:09:58.705411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:35.185 Initializing NVMe Controllers 00:41:35.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:35.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:35.185 Initialization complete. Launching workers. 00:41:35.185 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16620, failed: 4 00:41:35.185 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1534, failed to submit 15090 00:41:35.185 success 809, unsuccessful 725, failed 0 00:41:35.185 23:10:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:35.185 23:10:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:35.185 [2024-09-30 23:10:01.919177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:41:35.185 [2024-09-30 23:10:01.919217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:41:35.185 [2024-09-30 23:10:01.975056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:1616 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:41:35.185 [2024-09-30 23:10:01.975085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00cb p:1 m:0 dnr:0 00:41:35.185 [2024-09-30 23:10:01.990996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:2024 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:41:35.186 [2024-09-30 23:10:01.991026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:41:35.186 [2024-09-30 23:10:02.014953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:2504 len:8 PRP1 0x200007c44000 PRP2 0x0 00:41:35.186 [2024-09-30 23:10:02.014976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:4 cid:179 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:35.755 [2024-09-30 23:10:02.716121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:18912 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:41:35.755 [2024-09-30 23:10:02.716166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:36.695 [2024-09-30 23:10:03.490817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:37112 len:8 PRP1 0x200007c56000 PRP2 0x0 00:41:36.695 [2024-09-30 23:10:03.490843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0020 p:1 m:0 dnr:0 00:41:38.076 Initializing NVMe Controllers 00:41:38.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:38.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:38.076 Initialization complete. Launching workers. 00:41:38.076 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8614, failed: 6 00:41:38.076 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1217, failed to submit 7403 00:41:38.076 success 351, unsuccessful 866, failed 0 00:41:38.076 23:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:38.076 23:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:41.369 Initializing NVMe Controllers 00:41:41.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:41.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:41.369 Initialization complete. Launching workers. 
00:41:41.369 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43962, failed: 0 00:41:41.369 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2677, failed to submit 41285 00:41:41.369 success 584, unsuccessful 2093, failed 0 00:41:41.369 23:10:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:41.369 23:10:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.369 23:10:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:41.369 23:10:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.369 23:10:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:41.369 23:10:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.369 23:10:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1033596 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1033596 ']' 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1033596 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1033596 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1033596' 00:41:43.278 killing process with pid 1033596 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1033596 00:41:43.278 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1033596 00:41:43.539 00:41:43.539 real 0m12.295s 00:41:43.539 user 0m49.911s 00:41:43.539 sys 0m2.021s 00:41:43.539 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:43.539 23:10:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:43.539 ************************************ 00:41:43.539 END TEST spdk_target_abort 00:41:43.539 ************************************ 00:41:43.539 23:10:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:43.539 23:10:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:43.539 23:10:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:43.539 23:10:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:43.539 ************************************ 00:41:43.539 START TEST kernel_target_abort 00:41:43.539 
************************************ 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:43.540 23:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:46.849 Waiting for block devices as requested 00:41:47.109 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:47.109 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:47.109 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:47.109 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:47.369 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:47.369 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:47.369 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:47.629 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:47.629 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:47.889 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:47.889 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:47.889 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:48.149 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:48.149 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:48.149 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:48.409 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:48.409 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:48.669 No valid GPT data, bailing 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:41:48.669 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:48.670 23:10:15 
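'No valid GPT data, bailing' above is the safety probe deciding /dev/nvme0n1 carries no partition table and may be handed to the kernel target. A sketch of that check (inverted from block_in_use for readability):

  # Sketch: a block device counts as free when blkid reports no partition-table type.
  block_free() {
      local pt
      pt=$(blkid -s PTTYPE -o value "/dev/$1")
      [ -z "$pt" ]            # empty PTTYPE -> unpartitioned -> safe to export
  }
  block_free nvme0n1 && echo "/dev/nvme0n1 is unpartitioned"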
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:41:48.670 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:41:48.930 00:41:48.930 Discovery Log Number of Records 2, Generation counter 2 00:41:48.930 =====Discovery Log Entry 0====== 00:41:48.930 trtype: tcp 00:41:48.930 adrfam: ipv4 00:41:48.930 subtype: current discovery subsystem 00:41:48.930 treq: not specified, sq flow control disable supported 00:41:48.930 portid: 1 00:41:48.930 trsvcid: 4420 00:41:48.930 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:48.930 traddr: 10.0.0.1 00:41:48.930 eflags: none 00:41:48.930 sectype: none 00:41:48.930 =====Discovery Log Entry 1====== 00:41:48.930 trtype: tcp 00:41:48.930 adrfam: ipv4 00:41:48.930 subtype: nvme subsystem 00:41:48.930 treq: not specified, sq flow control disable supported 00:41:48.930 portid: 1 00:41:48.930 trsvcid: 4420 00:41:48.930 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:48.930 traddr: 10.0.0.1 00:41:48.930 eflags: none 00:41:48.930 sectype: none 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:48.930 23:10:15 
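The mkdir/echo/ln -s run above is the entire kernel NVMe/TCP target configuration through configfs; xtrace hides redirection targets, so the attribute paths below are filled in from the kernel nvmet configfs ABI rather than from the log. Consolidated sketch:

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  modprobe nvmet
  modprobe nvmet-tcp
  mkdir "$nvmet/subsystems/$nqn" "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1"
  echo "SPDK-$nqn"  > "$nvmet/subsystems/$nqn/attr_serial"          # assumed target of 'echo SPDK-...'
  echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"  # assumed target of the first 'echo 1'
  echo /dev/nvme0n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
  echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/$nqn"
  # Verify the listener with a discovery, as the trace does next.
  nvme discover -t tcp -a 10.0.0.1 -s 4420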
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:48.930 23:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:52.232 Initializing NVMe Controllers 00:41:52.232 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:52.232 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:52.232 Initialization complete. Launching workers. 00:41:52.232 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67924, failed: 0 00:41:52.232 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67924, failed to submit 0 00:41:52.232 success 0, unsuccessful 67924, failed 0 00:41:52.232 23:10:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:52.232 23:10:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:55.660 Initializing NVMe Controllers 00:41:55.660 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:55.660 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:55.660 Initialization complete. Launching workers. 
00:41:55.660 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119341, failed: 0 00:41:55.660 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30018, failed to submit 89323 00:41:55.660 success 0, unsuccessful 30018, failed 0 00:41:55.660 23:10:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:55.660 23:10:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:58.202 Initializing NVMe Controllers 00:41:58.202 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:58.202 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:58.202 Initialization complete. Launching workers. 00:41:58.202 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146655, failed: 0 00:41:58.202 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36710, failed to submit 109945 00:41:58.202 success 0, unsuccessful 36710, failed 0 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:41:58.202 23:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:02.404 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:02.404 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:42:02.404 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:03.786 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:04.047 00:42:04.047 real 0m20.497s 00:42:04.047 user 0m9.896s 00:42:04.047 sys 0m6.251s 00:42:04.047 23:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:04.047 23:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:04.047 ************************************ 00:42:04.047 END TEST kernel_target_abort 00:42:04.047 ************************************ 00:42:04.047 23:10:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:04.047 23:10:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:04.047 23:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:04.047 23:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:04.047 23:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:04.047 23:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:04.047 23:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:04.047 23:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:04.047 rmmod nvme_tcp 00:42:04.047 rmmod nvme_fabrics 00:42:04.047 rmmod nvme_keyring 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 1033596 ']' 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 1033596 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1033596 ']' 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1033596 00:42:04.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1033596) - No such process 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1033596 is not found' 00:42:04.047 Process with pid 1033596 is not found 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:42:04.047 23:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:08.247 Waiting for block devices as requested 00:42:08.247 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:08.247 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:08.247 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:08.247 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:08.247 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:08.247 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:08.247 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:08.247 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:08.247 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:08.507 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:08.507 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:08.507 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:08.766 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:08.766 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:08.766 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:08.766 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:09.025 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:09.285 23:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:11.824 23:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:11.824 00:42:11.824 real 0m52.927s 00:42:11.824 user 1m5.264s 00:42:11.824 sys 0m19.554s 00:42:11.824 23:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:11.824 23:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:11.824 ************************************ 00:42:11.824 END TEST nvmf_abort_qd_sizes 00:42:11.824 ************************************ 00:42:11.824 23:10:38 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:11.824 23:10:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:11.824 23:10:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:11.824 23:10:38 -- common/autotest_common.sh@10 -- # set +x 00:42:11.824 ************************************ 00:42:11.824 START TEST keyring_file 00:42:11.824 ************************************ 00:42:11.824 23:10:38 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:11.824 * Looking for test storage... 
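The clean_kernel_target and nvmftestfini records above tear everything down in reverse: disable and unlink before rmdir, leaf directories before parents, then unload the target- and initiator-side modules. Condensed from the trace (the file behind the bare "echo 0" is not shown by xtrace; namespaces/1/enable is an assumption):

echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet
for i in {1..20}; do                    # initiator modules may still be busy
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK's firewall rules
ip -4 addr flush cvl_0_1                               # interface name from this run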
00:42:11.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:11.824 23:10:38 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:11.824 23:10:38 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:42:11.824 23:10:38 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:11.824 23:10:38 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:11.824 23:10:38 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.825 --rc genhtml_branch_coverage=1 00:42:11.825 --rc genhtml_function_coverage=1 00:42:11.825 --rc genhtml_legend=1 00:42:11.825 --rc geninfo_all_blocks=1 00:42:11.825 --rc geninfo_unexecuted_blocks=1 00:42:11.825 00:42:11.825 ' 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.825 --rc genhtml_branch_coverage=1 00:42:11.825 --rc genhtml_function_coverage=1 00:42:11.825 --rc genhtml_legend=1 00:42:11.825 --rc geninfo_all_blocks=1 
00:42:11.825 --rc geninfo_unexecuted_blocks=1 00:42:11.825 00:42:11.825 ' 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.825 --rc genhtml_branch_coverage=1 00:42:11.825 --rc genhtml_function_coverage=1 00:42:11.825 --rc genhtml_legend=1 00:42:11.825 --rc geninfo_all_blocks=1 00:42:11.825 --rc geninfo_unexecuted_blocks=1 00:42:11.825 00:42:11.825 ' 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.825 --rc genhtml_branch_coverage=1 00:42:11.825 --rc genhtml_function_coverage=1 00:42:11.825 --rc genhtml_legend=1 00:42:11.825 --rc geninfo_all_blocks=1 00:42:11.825 --rc geninfo_unexecuted_blocks=1 00:42:11.825 00:42:11.825 ' 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:11.825 23:10:38 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:11.825 23:10:38 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.825 23:10:38 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.825 23:10:38 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.825 23:10:38 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:11.825 23:10:38 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:11.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
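The prep_key helper being entered here expands, over the records that follow, to a small recipe: write the key in NVMe TLS PSK interchange form to a mktemp file and lock its permissions down. format_interchange_psk builds the payload with an inline python snippet whose body xtrace does not show; the layout noted in the comment below (NVMeTLSkey-1 prefix, digest indicator, base64 payload) is assumed from the interchange format rather than read from this log.

name=key0
key=00112233445566778899aabbccddeeff
digest=0                                            # 0 selects the no-digest PSK variant
path=$(mktemp)                                      # /tmp/tmp.aEGzNouzjp in this run
format_interchange_psk "$key" "$digest" > "$path"   # emits NVMeTLSkey-1:..:<base64>:
chmod 0600 "$path"                                  # file.sh later flips this to 0660
                                                    # to prove lax permissions are rejected
echo "$path"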
00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aEGzNouzjp 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aEGzNouzjp 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aEGzNouzjp 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.aEGzNouzjp 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0YK0wUhVQu 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:11.825 23:10:38 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0YK0wUhVQu 00:42:11.825 23:10:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0YK0wUhVQu 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0YK0wUhVQu 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@30 -- # tgtpid=1044203 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1044203 00:42:11.825 23:10:38 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1044203 ']' 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:11.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:11.825 23:10:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:11.825 [2024-09-30 23:10:38.738346] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:42:11.826 [2024-09-30 23:10:38.738402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044203 ] 00:42:11.826 [2024-09-30 23:10:38.815385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.086 [2024-09-30 23:10:38.881687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:12.656 23:10:39 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:12.656 23:10:39 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:12.656 23:10:39 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:12.656 23:10:39 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.656 23:10:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:12.656 [2024-09-30 23:10:39.524941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:12.656 null0 00:42:12.656 [2024-09-30 23:10:39.556983] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:12.656 [2024-09-30 23:10:39.557214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:12.656 23:10:39 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.657 23:10:39 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:12.657 [2024-09-30 23:10:39.589045] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:12.657 request: 00:42:12.657 { 00:42:12.657 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:12.657 "secure_channel": false, 00:42:12.657 "listen_address": { 00:42:12.657 "trtype": "tcp", 00:42:12.657 "traddr": "127.0.0.1", 00:42:12.657 "trsvcid": "4420" 00:42:12.657 }, 00:42:12.657 "method": "nvmf_subsystem_add_listener", 00:42:12.657 "req_id": 1 00:42:12.657 } 00:42:12.657 Got JSON-RPC error response 00:42:12.657 response: 00:42:12.657 { 00:42:12.657 
"code": -32602, 00:42:12.657 "message": "Invalid parameters" 00:42:12.657 } 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:12.657 23:10:39 keyring_file -- keyring/file.sh@47 -- # bperfpid=1044230 00:42:12.657 23:10:39 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1044230 /var/tmp/bperf.sock 00:42:12.657 23:10:39 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1044230 ']' 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:12.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:12.657 23:10:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:12.657 [2024-09-30 23:10:39.648789] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:42:12.657 [2024-09-30 23:10:39.648840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044230 ] 00:42:12.917 [2024-09-30 23:10:39.724157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.917 [2024-09-30 23:10:39.789764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:13.489 23:10:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:13.489 23:10:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:13.489 23:10:40 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aEGzNouzjp 00:42:13.489 23:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aEGzNouzjp 00:42:13.750 23:10:40 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0YK0wUhVQu 00:42:13.750 23:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0YK0wUhVQu 00:42:14.011 23:10:40 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:14.011 23:10:40 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:14.011 23:10:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:14.011 23:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:14.011 23:10:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:14.271 23:10:41 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.aEGzNouzjp == \/\t\m\p\/\t\m\p\.\a\E\G\z\N\o\u\z\j\p ]] 00:42:14.271 23:10:41 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:14.271 23:10:41 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:14.271 23:10:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:14.271 23:10:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:14.271 23:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:14.271 23:10:41 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.0YK0wUhVQu == \/\t\m\p\/\t\m\p\.\0\Y\K\0\w\U\h\V\Q\u ]] 00:42:14.271 23:10:41 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:14.271 23:10:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:14.271 23:10:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:14.271 23:10:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:14.271 23:10:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:14.271 23:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:14.531 23:10:41 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:14.531 23:10:41 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:14.531 23:10:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:14.531 23:10:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:14.531 23:10:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:14.531 23:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:14.531 23:10:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:14.791 23:10:41 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:14.791 23:10:41 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:14.791 23:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:14.791 [2024-09-30 23:10:41.720555] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:14.791 nvme0n1 00:42:15.051 23:10:41 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:15.051 23:10:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:15.052 23:10:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:15.052 23:10:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:15.052 23:10:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:15.052 23:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:15.052 23:10:41 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:15.052 23:10:41 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:15.052 23:10:41 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:42:15.052 23:10:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:15.052 23:10:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:15.052 23:10:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:15.052 23:10:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:15.312 23:10:42 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:15.312 23:10:42 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:15.312 Running I/O for 1 seconds... 00:42:16.254 20063.00 IOPS, 78.37 MiB/s 00:42:16.254 Latency(us) 00:42:16.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:16.254 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:16.254 nvme0n1 : 1.00 20117.40 78.58 0.00 0.00 6351.94 2293.76 12724.91 00:42:16.254 =================================================================================================================== 00:42:16.255 Total : 20117.40 78.58 0.00 0.00 6351.94 2293.76 12724.91 00:42:16.255 { 00:42:16.255 "results": [ 00:42:16.255 { 00:42:16.255 "job": "nvme0n1", 00:42:16.255 "core_mask": "0x2", 00:42:16.255 "workload": "randrw", 00:42:16.255 "percentage": 50, 00:42:16.255 "status": "finished", 00:42:16.255 "queue_depth": 128, 00:42:16.255 "io_size": 4096, 00:42:16.255 "runtime": 1.003708, 00:42:16.255 "iops": 20117.404663507714, 00:42:16.255 "mibps": 78.58361196682701, 00:42:16.255 "io_failed": 0, 00:42:16.255 "io_timeout": 0, 00:42:16.255 "avg_latency_us": 6351.940496566297, 00:42:16.255 "min_latency_us": 2293.76, 00:42:16.255 "max_latency_us": 12724.906666666666 00:42:16.255 } 00:42:16.255 ], 00:42:16.255 "core_count": 1 00:42:16.255 } 00:42:16.515 23:10:43 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:16.515 23:10:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:16.515 23:10:43 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:16.515 23:10:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:16.515 23:10:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.515 23:10:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.515 23:10:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.515 23:10:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.777 23:10:43 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:16.777 23:10:43 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:16.777 23:10:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:16.777 23:10:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.777 23:10:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.777 23:10:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:16.777 23:10:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.038 23:10:43 
keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:17.038 23:10:43 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:17.038 23:10:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:17.038 23:10:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:17.038 23:10:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:17.038 23:10:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:17.038 23:10:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:17.038 23:10:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:17.038 23:10:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:17.039 23:10:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:17.039 [2024-09-30 23:10:43.989576] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:17.039 [2024-09-30 23:10:43.990221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecec20 (107): Transport endpoint is not connected 00:42:17.039 [2024-09-30 23:10:43.991218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecec20 (9): Bad file descriptor 00:42:17.039 [2024-09-30 23:10:43.992219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:17.039 [2024-09-30 23:10:43.992228] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:17.039 [2024-09-30 23:10:43.992234] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:17.039 [2024-09-30 23:10:43.992241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
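The attach above used --psk key1, which does not match the key the target side accepts, so the TLS handshake collapses (errno 107, Transport endpoint is not connected, in the records above) and bdevperf surfaces it as JSON-RPC error -5. The test asserts that failure with the NOT wrapper from autotest_common.sh; below is a minimal sketch of its contract, not the traced implementation (which also validates the callee via valid_exec_arg and, as the es-handling records show, special-cases exit codes above 128):

NOT() {
  local es=0
  "$@" || es=$?   # run the command, keep its exit status
  (( es != 0 ))   # succeed only if the command failed
}
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1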
00:42:17.039 request: 00:42:17.039 { 00:42:17.039 "name": "nvme0", 00:42:17.039 "trtype": "tcp", 00:42:17.039 "traddr": "127.0.0.1", 00:42:17.039 "adrfam": "ipv4", 00:42:17.039 "trsvcid": "4420", 00:42:17.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:17.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:17.039 "prchk_reftag": false, 00:42:17.039 "prchk_guard": false, 00:42:17.039 "hdgst": false, 00:42:17.039 "ddgst": false, 00:42:17.039 "psk": "key1", 00:42:17.039 "allow_unrecognized_csi": false, 00:42:17.039 "method": "bdev_nvme_attach_controller", 00:42:17.039 "req_id": 1 00:42:17.039 } 00:42:17.039 Got JSON-RPC error response 00:42:17.039 response: 00:42:17.039 { 00:42:17.039 "code": -5, 00:42:17.039 "message": "Input/output error" 00:42:17.039 } 00:42:17.039 23:10:44 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:17.039 23:10:44 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:17.039 23:10:44 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:17.039 23:10:44 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:17.039 23:10:44 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:17.039 23:10:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:17.039 23:10:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:17.039 23:10:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:17.039 23:10:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:17.039 23:10:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.300 23:10:44 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:17.300 23:10:44 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:17.300 23:10:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:17.300 23:10:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:17.300 23:10:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:17.300 23:10:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:17.300 23:10:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.561 23:10:44 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:17.561 23:10:44 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:17.561 23:10:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:17.561 23:10:44 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:17.561 23:10:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:17.822 23:10:44 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:17.822 23:10:44 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:17.822 23:10:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.083 23:10:44 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:18.083 23:10:44 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.aEGzNouzjp 00:42:18.083 23:10:44 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.aEGzNouzjp 00:42:18.083 23:10:44 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:18.083 23:10:44 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.aEGzNouzjp 00:42:18.083 23:10:44 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:18.083 23:10:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:18.083 23:10:44 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:18.083 23:10:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:18.083 23:10:44 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aEGzNouzjp 00:42:18.083 23:10:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aEGzNouzjp 00:42:18.083 [2024-09-30 23:10:45.037930] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.aEGzNouzjp': 0100660 00:42:18.083 [2024-09-30 23:10:45.037954] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:18.083 request: 00:42:18.083 { 00:42:18.083 "name": "key0", 00:42:18.083 "path": "/tmp/tmp.aEGzNouzjp", 00:42:18.083 "method": "keyring_file_add_key", 00:42:18.083 "req_id": 1 00:42:18.083 } 00:42:18.083 Got JSON-RPC error response 00:42:18.083 response: 00:42:18.083 { 00:42:18.083 "code": -1, 00:42:18.083 "message": "Operation not permitted" 00:42:18.083 } 00:42:18.083 23:10:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:18.083 23:10:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:18.083 23:10:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:18.083 23:10:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:18.083 23:10:45 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.aEGzNouzjp 00:42:18.083 23:10:45 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aEGzNouzjp 00:42:18.083 23:10:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aEGzNouzjp 00:42:18.344 23:10:45 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.aEGzNouzjp 00:42:18.344 23:10:45 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:18.344 23:10:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:18.344 23:10:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:18.344 23:10:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.345 23:10:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.345 23:10:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:18.606 23:10:45 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:18.606 23:10:45 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:18.606 23:10:45 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:18.606 23:10:45 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:18.606 23:10:45 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:18.606 23:10:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:18.606 23:10:45 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:18.606 23:10:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:18.606 23:10:45 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:18.606 23:10:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:18.606 [2024-09-30 23:10:45.611373] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.aEGzNouzjp': No such file or directory 00:42:18.606 [2024-09-30 23:10:45.611387] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:18.606 [2024-09-30 23:10:45.611400] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:18.606 [2024-09-30 23:10:45.611406] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:18.606 [2024-09-30 23:10:45.611412] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:18.606 [2024-09-30 23:10:45.611417] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:18.606 request: 00:42:18.606 { 00:42:18.606 "name": "nvme0", 00:42:18.606 "trtype": "tcp", 00:42:18.606 "traddr": "127.0.0.1", 00:42:18.606 "adrfam": "ipv4", 00:42:18.606 "trsvcid": "4420", 00:42:18.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:18.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:18.606 "prchk_reftag": false, 00:42:18.606 "prchk_guard": false, 00:42:18.606 "hdgst": false, 00:42:18.606 "ddgst": false, 00:42:18.606 "psk": "key0", 00:42:18.606 "allow_unrecognized_csi": false, 00:42:18.606 "method": "bdev_nvme_attach_controller", 00:42:18.606 "req_id": 1 00:42:18.606 } 00:42:18.606 Got JSON-RPC error response 00:42:18.606 response: 00:42:18.606 { 00:42:18.606 "code": -19, 00:42:18.606 "message": "No such device" 00:42:18.606 } 00:42:18.868 23:10:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:18.868 23:10:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:18.868 23:10:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:18.868 23:10:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:18.868 23:10:45 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:18.868 23:10:45 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aYNF07fti1 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:18.868 23:10:45 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:18.868 23:10:45 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:18.868 23:10:45 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:18.868 23:10:45 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:42:18.868 23:10:45 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:18.868 23:10:45 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aYNF07fti1 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aYNF07fti1 00:42:18.868 23:10:45 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.aYNF07fti1 00:42:18.868 23:10:45 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aYNF07fti1 00:42:18.868 23:10:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aYNF07fti1 00:42:19.129 23:10:46 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:19.129 23:10:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:19.390 nvme0n1 00:42:19.390 23:10:46 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:19.390 23:10:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:19.390 23:10:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.390 23:10:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.390 23:10:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.390 23:10:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.651 23:10:46 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:19.651 23:10:46 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:19.651 23:10:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:19.651 23:10:46 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:19.651 23:10:46 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:19.651 23:10:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.651 23:10:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:19.651 23:10:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.912 23:10:46 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:19.912 23:10:46 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:19.912 23:10:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:19.912 23:10:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.912 23:10:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.912 23:10:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.912 23:10:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.173 23:10:46 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:20.173 23:10:46 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:20.173 23:10:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:20.173 23:10:47 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:20.173 23:10:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.173 23:10:47 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:20.435 23:10:47 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:20.435 23:10:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aYNF07fti1 00:42:20.435 23:10:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aYNF07fti1 00:42:20.696 23:10:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0YK0wUhVQu 00:42:20.696 23:10:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0YK0wUhVQu 00:42:20.696 23:10:47 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.696 23:10:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.957 nvme0n1 00:42:20.957 23:10:47 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:20.957 23:10:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:21.217 23:10:48 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:21.217 "subsystems": [ 00:42:21.217 { 00:42:21.217 "subsystem": "keyring", 00:42:21.217 "config": [ 00:42:21.217 { 00:42:21.217 "method": "keyring_file_add_key", 00:42:21.217 "params": { 00:42:21.217 "name": "key0", 00:42:21.217 "path": "/tmp/tmp.aYNF07fti1" 00:42:21.217 } 00:42:21.217 }, 00:42:21.217 { 00:42:21.218 "method": "keyring_file_add_key", 00:42:21.218 "params": { 00:42:21.218 "name": "key1", 00:42:21.218 "path": "/tmp/tmp.0YK0wUhVQu" 00:42:21.218 } 00:42:21.218 } 00:42:21.218 ] 
00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "subsystem": "iobuf", 00:42:21.218 "config": [ 00:42:21.218 { 00:42:21.218 "method": "iobuf_set_options", 00:42:21.218 "params": { 00:42:21.218 "small_pool_count": 8192, 00:42:21.218 "large_pool_count": 1024, 00:42:21.218 "small_bufsize": 8192, 00:42:21.218 "large_bufsize": 135168 00:42:21.218 } 00:42:21.218 } 00:42:21.218 ] 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "subsystem": "sock", 00:42:21.218 "config": [ 00:42:21.218 { 00:42:21.218 "method": "sock_set_default_impl", 00:42:21.218 "params": { 00:42:21.218 "impl_name": "posix" 00:42:21.218 } 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "method": "sock_impl_set_options", 00:42:21.218 "params": { 00:42:21.218 "impl_name": "ssl", 00:42:21.218 "recv_buf_size": 4096, 00:42:21.218 "send_buf_size": 4096, 00:42:21.218 "enable_recv_pipe": true, 00:42:21.218 "enable_quickack": false, 00:42:21.218 "enable_placement_id": 0, 00:42:21.218 "enable_zerocopy_send_server": true, 00:42:21.218 "enable_zerocopy_send_client": false, 00:42:21.218 "zerocopy_threshold": 0, 00:42:21.218 "tls_version": 0, 00:42:21.218 "enable_ktls": false 00:42:21.218 } 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "method": "sock_impl_set_options", 00:42:21.218 "params": { 00:42:21.218 "impl_name": "posix", 00:42:21.218 "recv_buf_size": 2097152, 00:42:21.218 "send_buf_size": 2097152, 00:42:21.218 "enable_recv_pipe": true, 00:42:21.218 "enable_quickack": false, 00:42:21.218 "enable_placement_id": 0, 00:42:21.218 "enable_zerocopy_send_server": true, 00:42:21.218 "enable_zerocopy_send_client": false, 00:42:21.218 "zerocopy_threshold": 0, 00:42:21.218 "tls_version": 0, 00:42:21.218 "enable_ktls": false 00:42:21.218 } 00:42:21.218 } 00:42:21.218 ] 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "subsystem": "vmd", 00:42:21.218 "config": [] 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "subsystem": "accel", 00:42:21.218 "config": [ 00:42:21.218 { 00:42:21.218 "method": "accel_set_options", 00:42:21.218 "params": { 00:42:21.218 "small_cache_size": 128, 00:42:21.218 "large_cache_size": 16, 00:42:21.218 "task_count": 2048, 00:42:21.218 "sequence_count": 2048, 00:42:21.218 "buf_count": 2048 00:42:21.218 } 00:42:21.218 } 00:42:21.218 ] 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "subsystem": "bdev", 00:42:21.218 "config": [ 00:42:21.218 { 00:42:21.218 "method": "bdev_set_options", 00:42:21.218 "params": { 00:42:21.218 "bdev_io_pool_size": 65535, 00:42:21.218 "bdev_io_cache_size": 256, 00:42:21.218 "bdev_auto_examine": true, 00:42:21.218 "iobuf_small_cache_size": 128, 00:42:21.218 "iobuf_large_cache_size": 16 00:42:21.218 } 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "method": "bdev_raid_set_options", 00:42:21.218 "params": { 00:42:21.218 "process_window_size_kb": 1024, 00:42:21.218 "process_max_bandwidth_mb_sec": 0 00:42:21.218 } 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "method": "bdev_iscsi_set_options", 00:42:21.218 "params": { 00:42:21.218 "timeout_sec": 30 00:42:21.218 } 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "method": "bdev_nvme_set_options", 00:42:21.218 "params": { 00:42:21.218 "action_on_timeout": "none", 00:42:21.218 "timeout_us": 0, 00:42:21.218 "timeout_admin_us": 0, 00:42:21.218 "keep_alive_timeout_ms": 10000, 00:42:21.218 "arbitration_burst": 0, 00:42:21.218 "low_priority_weight": 0, 00:42:21.218 "medium_priority_weight": 0, 00:42:21.218 "high_priority_weight": 0, 00:42:21.218 "nvme_adminq_poll_period_us": 10000, 00:42:21.218 "nvme_ioq_poll_period_us": 0, 00:42:21.218 "io_queue_requests": 512, 00:42:21.218 "delay_cmd_submit": true, 
00:42:21.218 "transport_retry_count": 4, 00:42:21.218 "bdev_retry_count": 3, 00:42:21.218 "transport_ack_timeout": 0, 00:42:21.218 "ctrlr_loss_timeout_sec": 0, 00:42:21.218 "reconnect_delay_sec": 0, 00:42:21.218 "fast_io_fail_timeout_sec": 0, 00:42:21.218 "disable_auto_failback": false, 00:42:21.218 "generate_uuids": false, 00:42:21.218 "transport_tos": 0, 00:42:21.218 "nvme_error_stat": false, 00:42:21.218 "rdma_srq_size": 0, 00:42:21.218 "io_path_stat": false, 00:42:21.218 "allow_accel_sequence": false, 00:42:21.218 "rdma_max_cq_size": 0, 00:42:21.218 "rdma_cm_event_timeout_ms": 0, 00:42:21.218 "dhchap_digests": [ 00:42:21.218 "sha256", 00:42:21.218 "sha384", 00:42:21.218 "sha512" 00:42:21.218 ], 00:42:21.218 "dhchap_dhgroups": [ 00:42:21.218 "null", 00:42:21.218 "ffdhe2048", 00:42:21.218 "ffdhe3072", 00:42:21.218 "ffdhe4096", 00:42:21.218 "ffdhe6144", 00:42:21.218 "ffdhe8192" 00:42:21.218 ] 00:42:21.218 } 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "method": "bdev_nvme_attach_controller", 00:42:21.218 "params": { 00:42:21.218 "name": "nvme0", 00:42:21.218 "trtype": "TCP", 00:42:21.218 "adrfam": "IPv4", 00:42:21.218 "traddr": "127.0.0.1", 00:42:21.218 "trsvcid": "4420", 00:42:21.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:21.218 "prchk_reftag": false, 00:42:21.218 "prchk_guard": false, 00:42:21.218 "ctrlr_loss_timeout_sec": 0, 00:42:21.218 "reconnect_delay_sec": 0, 00:42:21.218 "fast_io_fail_timeout_sec": 0, 00:42:21.218 "psk": "key0", 00:42:21.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:21.218 "hdgst": false, 00:42:21.218 "ddgst": false 00:42:21.218 } 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "method": "bdev_nvme_set_hotplug", 00:42:21.218 "params": { 00:42:21.218 "period_us": 100000, 00:42:21.218 "enable": false 00:42:21.218 } 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "method": "bdev_wait_for_examine" 00:42:21.218 } 00:42:21.218 ] 00:42:21.218 }, 00:42:21.218 { 00:42:21.218 "subsystem": "nbd", 00:42:21.218 "config": [] 00:42:21.218 } 00:42:21.218 ] 00:42:21.218 }' 00:42:21.218 23:10:48 keyring_file -- keyring/file.sh@115 -- # killprocess 1044230 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1044230 ']' 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1044230 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044230 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044230' 00:42:21.218 killing process with pid 1044230 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@969 -- # kill 1044230 00:42:21.218 Received shutdown signal, test time was about 1.000000 seconds 00:42:21.218 00:42:21.218 Latency(us) 00:42:21.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:21.218 =================================================================================================================== 00:42:21.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:21.218 23:10:48 keyring_file -- common/autotest_common.sh@974 -- # wait 1044230 00:42:21.480 23:10:48 keyring_file -- keyring/file.sh@118 -- # bperfpid=1046037 
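The sequence above is the config round-trip at the heart of this test: file.sh@113 snapshots the first bperf instance's live JSON configuration with save_config, killprocess tears that instance down, and the relaunch that follows (file.sh@116, visible just below) replays the identical configuration into a fresh bdevperf through /dev/fd/63. A minimal sketch of that save-and-replay pattern, assuming an SPDK checkout with scripts/rpc.py and a built bdevperf; the paths and flags mirror the trace, nothing here is invented API:

#!/usr/bin/env bash
# Snapshot the running app's configuration (keyring, sock, bdev, ... subsystems)
# as one JSON document over its RPC socket.
config=$(./scripts/rpc.py -s /var/tmp/bperf.sock save_config)

# Hand the snapshot to a fresh bdevperf via process substitution; bash exposes
# <(...) as /dev/fd/63, which is exactly the -c argument the trace shows.
# -z keeps bdevperf idle until a later perform_tests RPC.
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")

Because the snapshot embeds the two keyring_file_add_key calls, the new instance comes up with key0 and key1 already loaded, which is what the jq length check (( 2 == 2 )) further below verifies.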
00:42:21.480 23:10:48 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1046037 /var/tmp/bperf.sock 00:42:21.480 23:10:48 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1046037 ']' 00:42:21.480 23:10:48 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:21.480 23:10:48 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:21.480 23:10:48 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:21.480 23:10:48 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:21.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:21.480 23:10:48 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:21.480 23:10:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:21.480 23:10:48 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:21.480 "subsystems": [ 00:42:21.480 { 00:42:21.480 "subsystem": "keyring", 00:42:21.480 "config": [ 00:42:21.480 { 00:42:21.480 "method": "keyring_file_add_key", 00:42:21.480 "params": { 00:42:21.480 "name": "key0", 00:42:21.480 "path": "/tmp/tmp.aYNF07fti1" 00:42:21.480 } 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "method": "keyring_file_add_key", 00:42:21.480 "params": { 00:42:21.480 "name": "key1", 00:42:21.480 "path": "/tmp/tmp.0YK0wUhVQu" 00:42:21.480 } 00:42:21.480 } 00:42:21.480 ] 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "subsystem": "iobuf", 00:42:21.480 "config": [ 00:42:21.480 { 00:42:21.480 "method": "iobuf_set_options", 00:42:21.480 "params": { 00:42:21.480 "small_pool_count": 8192, 00:42:21.480 "large_pool_count": 1024, 00:42:21.480 "small_bufsize": 8192, 00:42:21.480 "large_bufsize": 135168 00:42:21.480 } 00:42:21.480 } 00:42:21.480 ] 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "subsystem": "sock", 00:42:21.480 "config": [ 00:42:21.480 { 00:42:21.480 "method": "sock_set_default_impl", 00:42:21.480 "params": { 00:42:21.480 "impl_name": "posix" 00:42:21.480 } 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "method": "sock_impl_set_options", 00:42:21.480 "params": { 00:42:21.480 "impl_name": "ssl", 00:42:21.480 "recv_buf_size": 4096, 00:42:21.480 "send_buf_size": 4096, 00:42:21.480 "enable_recv_pipe": true, 00:42:21.480 "enable_quickack": false, 00:42:21.480 "enable_placement_id": 0, 00:42:21.480 "enable_zerocopy_send_server": true, 00:42:21.480 "enable_zerocopy_send_client": false, 00:42:21.480 "zerocopy_threshold": 0, 00:42:21.480 "tls_version": 0, 00:42:21.480 "enable_ktls": false 00:42:21.480 } 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "method": "sock_impl_set_options", 00:42:21.480 "params": { 00:42:21.480 "impl_name": "posix", 00:42:21.480 "recv_buf_size": 2097152, 00:42:21.480 "send_buf_size": 2097152, 00:42:21.480 "enable_recv_pipe": true, 00:42:21.480 "enable_quickack": false, 00:42:21.480 "enable_placement_id": 0, 00:42:21.480 "enable_zerocopy_send_server": true, 00:42:21.480 "enable_zerocopy_send_client": false, 00:42:21.480 "zerocopy_threshold": 0, 00:42:21.480 "tls_version": 0, 00:42:21.480 "enable_ktls": false 00:42:21.480 } 00:42:21.480 } 00:42:21.480 ] 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "subsystem": "vmd", 00:42:21.480 "config": [] 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "subsystem": "accel", 00:42:21.480 "config": [ 00:42:21.480 { 
00:42:21.480 "method": "accel_set_options", 00:42:21.480 "params": { 00:42:21.480 "small_cache_size": 128, 00:42:21.480 "large_cache_size": 16, 00:42:21.480 "task_count": 2048, 00:42:21.480 "sequence_count": 2048, 00:42:21.480 "buf_count": 2048 00:42:21.480 } 00:42:21.480 } 00:42:21.480 ] 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "subsystem": "bdev", 00:42:21.480 "config": [ 00:42:21.480 { 00:42:21.480 "method": "bdev_set_options", 00:42:21.480 "params": { 00:42:21.480 "bdev_io_pool_size": 65535, 00:42:21.480 "bdev_io_cache_size": 256, 00:42:21.480 "bdev_auto_examine": true, 00:42:21.480 "iobuf_small_cache_size": 128, 00:42:21.480 "iobuf_large_cache_size": 16 00:42:21.480 } 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "method": "bdev_raid_set_options", 00:42:21.480 "params": { 00:42:21.480 "process_window_size_kb": 1024, 00:42:21.480 "process_max_bandwidth_mb_sec": 0 00:42:21.480 } 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "method": "bdev_iscsi_set_options", 00:42:21.480 "params": { 00:42:21.480 "timeout_sec": 30 00:42:21.480 } 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "method": "bdev_nvme_set_options", 00:42:21.480 "params": { 00:42:21.480 "action_on_timeout": "none", 00:42:21.480 "timeout_us": 0, 00:42:21.480 "timeout_admin_us": 0, 00:42:21.480 "keep_alive_timeout_ms": 10000, 00:42:21.480 "arbitration_burst": 0, 00:42:21.480 "low_priority_weight": 0, 00:42:21.480 "medium_priority_weight": 0, 00:42:21.480 "high_priority_weight": 0, 00:42:21.480 "nvme_adminq_poll_period_us": 10000, 00:42:21.480 "nvme_ioq_poll_period_us": 0, 00:42:21.480 "io_queue_requests": 512, 00:42:21.480 "delay_cmd_submit": true, 00:42:21.480 "transport_retry_count": 4, 00:42:21.480 "bdev_retry_count": 3, 00:42:21.480 "transport_ack_timeout": 0, 00:42:21.480 "ctrlr_loss_timeout_sec": 0, 00:42:21.480 "reconnect_delay_sec": 0, 00:42:21.480 "fast_io_fail_timeout_sec": 0, 00:42:21.480 "disable_auto_failback": false, 00:42:21.480 "generate_uuids": false, 00:42:21.480 "transport_tos": 0, 00:42:21.480 "nvme_error_stat": false, 00:42:21.480 "rdma_srq_size": 0, 00:42:21.480 "io_path_stat": false, 00:42:21.480 "allow_accel_sequence": false, 00:42:21.480 "rdma_max_cq_size": 0, 00:42:21.480 "rdma_cm_event_timeout_ms": 0, 00:42:21.480 "dhchap_digests": [ 00:42:21.480 "sha256", 00:42:21.480 "sha384", 00:42:21.480 "sha512" 00:42:21.480 ], 00:42:21.480 "dhchap_dhgroups": [ 00:42:21.480 "null", 00:42:21.480 "ffdhe2048", 00:42:21.480 "ffdhe3072", 00:42:21.480 "ffdhe4096", 00:42:21.480 "ffdhe6144", 00:42:21.480 "ffdhe8192" 00:42:21.480 ] 00:42:21.480 } 00:42:21.480 }, 00:42:21.480 { 00:42:21.480 "method": "bdev_nvme_attach_controller", 00:42:21.480 "params": { 00:42:21.480 "name": "nvme0", 00:42:21.480 "trtype": "TCP", 00:42:21.480 "adrfam": "IPv4", 00:42:21.480 "traddr": "127.0.0.1", 00:42:21.480 "trsvcid": "4420", 00:42:21.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:21.480 "prchk_reftag": false, 00:42:21.481 "prchk_guard": false, 00:42:21.481 "ctrlr_loss_timeout_sec": 0, 00:42:21.481 "reconnect_delay_sec": 0, 00:42:21.481 "fast_io_fail_timeout_sec": 0, 00:42:21.481 "psk": "key0", 00:42:21.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:21.481 "hdgst": false, 00:42:21.481 "ddgst": false 00:42:21.481 } 00:42:21.481 }, 00:42:21.481 { 00:42:21.481 "method": "bdev_nvme_set_hotplug", 00:42:21.481 "params": { 00:42:21.481 "period_us": 100000, 00:42:21.481 "enable": false 00:42:21.481 } 00:42:21.481 }, 00:42:21.481 { 00:42:21.481 "method": "bdev_wait_for_examine" 00:42:21.481 } 00:42:21.481 ] 00:42:21.481 }, 00:42:21.481 { 
00:42:21.481 "subsystem": "nbd", 00:42:21.481 "config": [] 00:42:21.481 } 00:42:21.481 ] 00:42:21.481 }' 00:42:21.481 [2024-09-30 23:10:48.329557] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 00:42:21.481 [2024-09-30 23:10:48.329611] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046037 ] 00:42:21.481 [2024-09-30 23:10:48.406264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.481 [2024-09-30 23:10:48.459181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:21.742 [2024-09-30 23:10:48.602068] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:22.312 23:10:49 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:22.312 23:10:49 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:22.312 23:10:49 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:22.312 23:10:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.312 23:10:49 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:22.312 23:10:49 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:22.312 23:10:49 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:22.312 23:10:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:22.312 23:10:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.572 23:10:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.572 23:10:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.572 23:10:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:22.572 23:10:49 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:22.572 23:10:49 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:22.572 23:10:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:22.572 23:10:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.572 23:10:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.572 23:10:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:22.572 23:10:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.832 23:10:49 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:22.832 23:10:49 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:22.832 23:10:49 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:22.832 23:10:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:23.093 23:10:49 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:23.093 23:10:49 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:23.093 23:10:49 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.aYNF07fti1 /tmp/tmp.0YK0wUhVQu 00:42:23.093 23:10:49 keyring_file -- keyring/file.sh@20 -- # killprocess 1046037 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@950 
-- # '[' -z 1046037 ']' 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1046037 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1046037 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1046037' 00:42:23.093 killing process with pid 1046037 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@969 -- # kill 1046037 00:42:23.093 Received shutdown signal, test time was about 1.000000 seconds 00:42:23.093 00:42:23.093 Latency(us) 00:42:23.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:23.093 =================================================================================================================== 00:42:23.093 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:23.093 23:10:49 keyring_file -- common/autotest_common.sh@974 -- # wait 1046037 00:42:23.093 23:10:50 keyring_file -- keyring/file.sh@21 -- # killprocess 1044203 00:42:23.093 23:10:50 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1044203 ']' 00:42:23.093 23:10:50 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1044203 00:42:23.093 23:10:50 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:23.093 23:10:50 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:23.093 23:10:50 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044203 00:42:23.354 23:10:50 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:23.354 23:10:50 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:23.354 23:10:50 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044203' 00:42:23.354 killing process with pid 1044203 00:42:23.354 23:10:50 keyring_file -- common/autotest_common.sh@969 -- # kill 1044203 00:42:23.354 23:10:50 keyring_file -- common/autotest_common.sh@974 -- # wait 1044203 00:42:23.354 00:42:23.354 real 0m11.999s 00:42:23.354 user 0m29.028s 00:42:23.354 sys 0m2.645s 00:42:23.354 23:10:50 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:23.354 23:10:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:23.354 ************************************ 00:42:23.354 END TEST keyring_file 00:42:23.354 ************************************ 00:42:23.354 23:10:50 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:23.354 23:10:50 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:23.354 23:10:50 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:23.354 23:10:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:23.354 23:10:50 -- common/autotest_common.sh@10 -- # set +x 00:42:23.616 ************************************ 00:42:23.616 START TEST keyring_linux 00:42:23.616 ************************************ 00:42:23.616 23:10:50 keyring_linux -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:23.616 Joined session keyring: 1066376402 00:42:23.616 * Looking for test storage... 00:42:23.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:23.616 23:10:50 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:23.616 23:10:50 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:42:23.616 23:10:50 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:23.616 23:10:50 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:23.616 23:10:50 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:23.616 23:10:50 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:23.616 23:10:50 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:23.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.616 --rc genhtml_branch_coverage=1 00:42:23.616 --rc genhtml_function_coverage=1 00:42:23.616 --rc genhtml_legend=1 00:42:23.616 --rc geninfo_all_blocks=1 00:42:23.616 --rc geninfo_unexecuted_blocks=1 00:42:23.616 00:42:23.616 ' 00:42:23.616 23:10:50 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:23.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.616 --rc genhtml_branch_coverage=1 00:42:23.616 --rc genhtml_function_coverage=1 00:42:23.617 --rc genhtml_legend=1 00:42:23.617 --rc geninfo_all_blocks=1 00:42:23.617 --rc geninfo_unexecuted_blocks=1 00:42:23.617 00:42:23.617 ' 00:42:23.617 23:10:50 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:23.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.617 --rc genhtml_branch_coverage=1 00:42:23.617 --rc genhtml_function_coverage=1 00:42:23.617 --rc genhtml_legend=1 00:42:23.617 --rc geninfo_all_blocks=1 00:42:23.617 --rc geninfo_unexecuted_blocks=1 00:42:23.617 00:42:23.617 ' 00:42:23.617 23:10:50 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:23.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.617 --rc genhtml_branch_coverage=1 00:42:23.617 --rc genhtml_function_coverage=1 00:42:23.617 --rc genhtml_legend=1 00:42:23.617 --rc geninfo_all_blocks=1 00:42:23.617 --rc geninfo_unexecuted_blocks=1 00:42:23.617 00:42:23.617 ' 00:42:23.617 23:10:50 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:23.617 23:10:50 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:23.617 23:10:50 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:23.879 23:10:50 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:23.879 23:10:50 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:23.879 23:10:50 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:23.879 23:10:50 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:23.879 23:10:50 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:23.879 23:10:50 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:23.879 23:10:50 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:23.879 23:10:50 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:23.879 23:10:50 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:23.879 23:10:50 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:23.879 23:10:50 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:23.879 23:10:50 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.879 23:10:50 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.880 23:10:50 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.880 23:10:50 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:23.880 23:10:50 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
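A note on what distinguishes this suite from the keyring_file run above: instead of 0600-mode files under /tmp, keyring_linux parks the PSKs in the kernel's session keyring (the "Joined session keyring" line above). Further down the trace, keyctl add user :spdk-test:key0 ... @s registers each key and prints its serial number (703267137 and 809217006 here), and linux.sh later resolves the name back with keyctl search @s user :spdk-test:key0 and compares payloads with keyctl print. A minimal sketch of that round-trip, assuming the keyutils keyctl binary; the key name and PSK string mirror the trace:

#!/usr/bin/env bash
# Register an NVMe/TCP interchange PSK under the kernel session keyring (@s).
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # prints the new key's serial

keyctl search @s user :spdk-test:key0            # resolves the name to the same serial
keyctl print "$sn"                               # dumps the stored PSK payload
keyctl unlink "$sn" @s                           # what the suite's cleanup() performs

With keyring_linux_set_options --enable in effect, the --psk :spdk-test:key0 argument to bdev_nvme_attach_controller names this kernel key rather than a file path.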
00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:23.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@729 -- # python - 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:23.880 /tmp/:spdk-test:key0 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:23.880 
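The prep_key steps above show how the raw key string becomes an NVMe TLS interchange PSK: format_interchange_psk drives an inline python heredoc that emits NVMeTLSkey-1:<hh>:<base64>:, where <hh> is the two-digit PSK hash field (00 = none) and the base64 payload is the configured key's bytes with their little-endian CRC32 appended; the result lands in /tmp/:spdk-test:key0 with mode 0600. A self-contained sketch of that formatting step, assuming python3 is available (the sample key reproduces the MDAx...JEiQ blob seen in the trace):

#!/usr/bin/env bash
# Sketch of format_interchange_psk: NVMeTLSkey-1:<hash>:<b64(key || crc32_le)>:.
key=00112233445566778899aabbccddeeff
digest=0   # 00 = no PSK hash function
psk=$(python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib

key = sys.argv[1].encode()                   # the key string's raw bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # integrity tag, little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
      base64.b64encode(key + crc).decode()), end="")
PY
)
echo "$psk"  # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The same helper produced the contents of /tmp/tmp.aYNF07fti1 in the keyring_file run earlier; key1's prep on the lines that follow repeats these steps with digest 0 as well.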
23:10:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:42:23.880 23:10:50 keyring_linux -- nvmf/common.sh@729 -- # python - 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:23.880 23:10:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:23.880 /tmp/:spdk-test:key1 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1046527 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1046527 00:42:23.880 23:10:50 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:23.880 23:10:50 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1046527 ']' 00:42:23.880 23:10:50 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.880 23:10:50 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:23.880 23:10:50 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:23.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.880 23:10:50 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:23.880 23:10:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:23.880 [2024-09-30 23:10:50.804303] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:42:23.880 [2024-09-30 23:10:50.804358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046527 ] 00:42:23.880 [2024-09-30 23:10:50.882762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.140 [2024-09-30 23:10:50.936979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:24.713 23:10:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:24.713 [2024-09-30 23:10:51.603623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:24.713 null0 00:42:24.713 [2024-09-30 23:10:51.635672] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:24.713 [2024-09-30 23:10:51.636058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:24.713 23:10:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:24.713 703267137 00:42:24.713 23:10:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:24.713 809217006 00:42:24.713 23:10:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1046817 00:42:24.713 23:10:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1046817 /var/tmp/bperf.sock 00:42:24.713 23:10:51 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1046817 ']' 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:24.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:24.713 23:10:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:24.713 [2024-09-30 23:10:51.710685] Starting SPDK v25.01-pre git sha1 310cb0643 / DPDK 24.03.0 initialization... 
00:42:24.713 [2024-09-30 23:10:51.710734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046817 ] 00:42:24.973 [2024-09-30 23:10:51.786807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.973 [2024-09-30 23:10:51.840107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:25.543 23:10:52 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:25.544 23:10:52 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:25.544 23:10:52 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:25.544 23:10:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:25.804 23:10:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:25.804 23:10:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:26.065 23:10:52 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:26.065 23:10:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:26.065 [2024-09-30 23:10:53.044728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:26.326 nvme0n1 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:26.326 23:10:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:26.326 23:10:53 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:26.326 23:10:53 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:26.326 23:10:53 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:26.326 23:10:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:26.587 23:10:53 keyring_linux -- keyring/linux.sh@25 -- # sn=703267137 00:42:26.587 23:10:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:26.587 23:10:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:26.587 23:10:53 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 703267137 == \7\0\3\2\6\7\1\3\7 ]] 00:42:26.587 23:10:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 703267137 00:42:26.587 23:10:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:26.587 23:10:53 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:26.847 Running I/O for 1 seconds... 00:42:27.788 24520.00 IOPS, 95.78 MiB/s 00:42:27.788 Latency(us) 00:42:27.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:27.788 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:27.788 nvme0n1 : 1.01 24520.83 95.78 0.00 0.00 5204.84 2703.36 7099.73 00:42:27.788 =================================================================================================================== 00:42:27.788 Total : 24520.83 95.78 0.00 0.00 5204.84 2703.36 7099.73 00:42:27.788 { 00:42:27.788 "results": [ 00:42:27.788 { 00:42:27.788 "job": "nvme0n1", 00:42:27.788 "core_mask": "0x2", 00:42:27.788 "workload": "randread", 00:42:27.788 "status": "finished", 00:42:27.788 "queue_depth": 128, 00:42:27.788 "io_size": 4096, 00:42:27.788 "runtime": 1.005186, 00:42:27.788 "iops": 24520.83494994956, 00:42:27.788 "mibps": 95.78451152324047, 00:42:27.788 "io_failed": 0, 00:42:27.788 "io_timeout": 0, 00:42:27.788 "avg_latency_us": 5204.842271989613, 00:42:27.788 "min_latency_us": 2703.36, 00:42:27.788 "max_latency_us": 7099.733333333334 00:42:27.788 } 00:42:27.788 ], 00:42:27.788 "core_count": 1 00:42:27.788 } 00:42:27.788 23:10:54 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:27.788 23:10:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:28.049 23:10:54 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:28.049 23:10:54 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:28.049 23:10:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:28.049 23:10:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:28.049 23:10:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:28.049 23:10:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:28.049 23:10:55 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:28.049 23:10:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:28.049 23:10:55 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:28.049 23:10:55 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:28.049 23:10:55 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:28.049 23:10:55 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:28.049 23:10:55 keyring_linux -- common/autotest_common.sh@638 -- # local 
arg=bperf_cmd 00:42:28.049 23:10:55 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:28.049 23:10:55 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:28.049 23:10:55 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:28.050 23:10:55 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:28.050 23:10:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:28.310 [2024-09-30 23:10:55.176472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:28.310 [2024-09-30 23:10:55.177247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf5990 (107): Transport endpoint is not connected 00:42:28.311 [2024-09-30 23:10:55.178242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf5990 (9): Bad file descriptor 00:42:28.311 [2024-09-30 23:10:55.179244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:28.311 [2024-09-30 23:10:55.179251] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:28.311 [2024-09-30 23:10:55.179258] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:28.311 [2024-09-30 23:10:55.179265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:28.311 request: 00:42:28.311 { 00:42:28.311 "name": "nvme0", 00:42:28.311 "trtype": "tcp", 00:42:28.311 "traddr": "127.0.0.1", 00:42:28.311 "adrfam": "ipv4", 00:42:28.311 "trsvcid": "4420", 00:42:28.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:28.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:28.311 "prchk_reftag": false, 00:42:28.311 "prchk_guard": false, 00:42:28.311 "hdgst": false, 00:42:28.311 "ddgst": false, 00:42:28.311 "psk": ":spdk-test:key1", 00:42:28.311 "allow_unrecognized_csi": false, 00:42:28.311 "method": "bdev_nvme_attach_controller", 00:42:28.311 "req_id": 1 00:42:28.311 } 00:42:28.311 Got JSON-RPC error response 00:42:28.311 response: 00:42:28.311 { 00:42:28.311 "code": -5, 00:42:28.311 "message": "Input/output error" 00:42:28.311 } 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@33 -- # sn=703267137 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 703267137 00:42:28.311 1 links removed 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@33 -- # sn=809217006 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 809217006 00:42:28.311 1 links removed 00:42:28.311 23:10:55 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1046817 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1046817 ']' 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1046817 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1046817 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1046817' 00:42:28.311 killing process with pid 1046817 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@969 -- # kill 1046817 00:42:28.311 Received shutdown signal, test time was about 1.000000 seconds 00:42:28.311 00:42:28.311 
Latency(us) 00:42:28.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.311 =================================================================================================================== 00:42:28.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:28.311 23:10:55 keyring_linux -- common/autotest_common.sh@974 -- # wait 1046817 00:42:28.571 23:10:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1046527 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1046527 ']' 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1046527 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1046527 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1046527' 00:42:28.571 killing process with pid 1046527 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@969 -- # kill 1046527 00:42:28.571 23:10:55 keyring_linux -- common/autotest_common.sh@974 -- # wait 1046527 00:42:28.832 00:42:28.832 real 0m5.260s 00:42:28.832 user 0m9.784s 00:42:28.832 sys 0m1.463s 00:42:28.832 23:10:55 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:28.832 23:10:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:28.832 ************************************ 00:42:28.832 END TEST keyring_linux 00:42:28.832 ************************************ 00:42:28.832 23:10:55 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:28.832 23:10:55 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:28.832 23:10:55 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:28.832 23:10:55 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:28.832 23:10:55 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:28.832 23:10:55 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:28.832 23:10:55 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:28.832 23:10:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:28.832 23:10:55 -- common/autotest_common.sh@10 -- # set +x 00:42:28.832 23:10:55 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:28.832 23:10:55 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:28.832 23:10:55 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:28.832 23:10:55 -- common/autotest_common.sh@10 -- # set +x 00:42:36.979 INFO: APP EXITING 00:42:36.979 INFO: killing all VMs 00:42:36.979 INFO: killing vhost app 00:42:36.979 INFO: 
EXIT DONE 00:42:40.282 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:65:00.0 (144d a80a): Already using the nvme driver 00:42:40.282 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:42:40.282 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:42:40.542 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:42:40.542 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:42:40.542 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:42:44.744 Cleaning 00:42:44.744 Removing: /var/run/dpdk/spdk0/config 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:44.744 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:44.744 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:44.744 Removing: /var/run/dpdk/spdk1/config 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:44.744 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:44.744 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:44.744 Removing: /var/run/dpdk/spdk2/config 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:44.744 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:44.744 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:44.744 Removing: /var/run/dpdk/spdk3/config 00:42:44.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:44.744 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:44.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:44.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:44.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:44.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:44.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:44.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:44.744 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:44.744 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:44.744 Removing: /var/run/dpdk/spdk4/config 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:44.744 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:44.744 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:44.744 Removing: /dev/shm/bdev_svc_trace.1 00:42:44.744 Removing: /dev/shm/nvmf_trace.0 00:42:44.744 Removing: /dev/shm/spdk_tgt_trace.pid465991 00:42:44.744 Removing: /var/run/dpdk/spdk0 00:42:44.744 Removing: /var/run/dpdk/spdk1 00:42:44.744 Removing: /var/run/dpdk/spdk2 00:42:44.744 Removing: /var/run/dpdk/spdk3 00:42:44.744 Removing: /var/run/dpdk/spdk4 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1007139 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1007211 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1013398 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1015598 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1018212 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1019612 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1022374 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1023762 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1033951 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1034616 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1035196 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1038120 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1038614 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1039287 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1044203 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1044230 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1046037 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1046527 00:42:44.744 Removing: /var/run/dpdk/spdk_pid1046817 00:42:44.744 Removing: /var/run/dpdk/spdk_pid464482 00:42:44.744 Removing: /var/run/dpdk/spdk_pid465991 00:42:44.744 Removing: /var/run/dpdk/spdk_pid466824 00:42:44.744 Removing: /var/run/dpdk/spdk_pid467859 00:42:44.745 Removing: /var/run/dpdk/spdk_pid468199 00:42:44.745 Removing: /var/run/dpdk/spdk_pid469271 00:42:44.745 Removing: /var/run/dpdk/spdk_pid469605 00:42:44.745 Removing: /var/run/dpdk/spdk_pid469855 00:42:44.745 Removing: /var/run/dpdk/spdk_pid470876 00:42:44.745 Removing: /var/run/dpdk/spdk_pid471665 00:42:44.745 Removing: /var/run/dpdk/spdk_pid472057 00:42:44.745 Removing: /var/run/dpdk/spdk_pid472453 00:42:44.745 Removing: /var/run/dpdk/spdk_pid472870 00:42:44.745 Removing: /var/run/dpdk/spdk_pid473243 00:42:44.745 Removing: /var/run/dpdk/spdk_pid473403 00:42:44.745 Removing: /var/run/dpdk/spdk_pid473665 00:42:44.745 Removing: /var/run/dpdk/spdk_pid474051 00:42:44.745 
Removing: /var/run/dpdk/spdk_pid475408 00:42:44.745 Removing: /var/run/dpdk/spdk_pid478934 00:42:44.745 Removing: /var/run/dpdk/spdk_pid479270 00:42:44.745 Removing: /var/run/dpdk/spdk_pid479625 00:42:44.745 Removing: /var/run/dpdk/spdk_pid479799 00:42:44.745 Removing: /var/run/dpdk/spdk_pid480178 00:42:44.745 Removing: /var/run/dpdk/spdk_pid480509 00:42:44.745 Removing: /var/run/dpdk/spdk_pid480884 00:42:44.745 Removing: /var/run/dpdk/spdk_pid481070 00:42:44.745 Removing: /var/run/dpdk/spdk_pid481338 00:42:44.745 Removing: /var/run/dpdk/spdk_pid481597 00:42:44.745 Removing: /var/run/dpdk/spdk_pid481808 00:42:44.745 Removing: /var/run/dpdk/spdk_pid481974 00:42:44.745 Removing: /var/run/dpdk/spdk_pid482435 00:42:44.745 Removing: /var/run/dpdk/spdk_pid482772 00:42:44.745 Removing: /var/run/dpdk/spdk_pid483178 00:42:44.745 Removing: /var/run/dpdk/spdk_pid487987 00:42:44.745 Removing: /var/run/dpdk/spdk_pid493299 00:42:44.745 Removing: /var/run/dpdk/spdk_pid505598 00:42:44.745 Removing: /var/run/dpdk/spdk_pid506418 00:42:44.745 Removing: /var/run/dpdk/spdk_pid512093 00:42:44.745 Removing: /var/run/dpdk/spdk_pid512580 00:42:44.745 Removing: /var/run/dpdk/spdk_pid517828 00:42:44.745 Removing: /var/run/dpdk/spdk_pid524975 00:42:44.745 Removing: /var/run/dpdk/spdk_pid528284 00:42:44.745 Removing: /var/run/dpdk/spdk_pid541184 00:42:44.745 Removing: /var/run/dpdk/spdk_pid552327 00:42:44.745 Removing: /var/run/dpdk/spdk_pid554570 00:42:44.745 Removing: /var/run/dpdk/spdk_pid555593 00:42:44.745 Removing: /var/run/dpdk/spdk_pid577517 00:42:44.745 Removing: /var/run/dpdk/spdk_pid582494 00:42:44.745 Removing: /var/run/dpdk/spdk_pid640176 00:42:44.745 Removing: /var/run/dpdk/spdk_pid646645 00:42:44.745 Removing: /var/run/dpdk/spdk_pid653933 00:42:44.745 Removing: /var/run/dpdk/spdk_pid661497 00:42:44.745 Removing: /var/run/dpdk/spdk_pid661500 00:42:44.745 Removing: /var/run/dpdk/spdk_pid662504 00:42:44.745 Removing: /var/run/dpdk/spdk_pid663506 00:42:44.745 Removing: /var/run/dpdk/spdk_pid664515 00:42:44.745 Removing: /var/run/dpdk/spdk_pid665186 00:42:44.745 Removing: /var/run/dpdk/spdk_pid665188 00:42:44.745 Removing: /var/run/dpdk/spdk_pid665522 00:42:44.745 Removing: /var/run/dpdk/spdk_pid665535 00:42:44.745 Removing: /var/run/dpdk/spdk_pid665585 00:42:44.745 Removing: /var/run/dpdk/spdk_pid666652 00:42:44.745 Removing: /var/run/dpdk/spdk_pid667792 00:42:44.745 Removing: /var/run/dpdk/spdk_pid668865 00:42:44.745 Removing: /var/run/dpdk/spdk_pid669924 00:42:44.745 Removing: /var/run/dpdk/spdk_pid670036 00:42:44.745 Removing: /var/run/dpdk/spdk_pid670262 00:42:44.745 Removing: /var/run/dpdk/spdk_pid671626 00:42:44.745 Removing: /var/run/dpdk/spdk_pid672961 00:42:44.745 Removing: /var/run/dpdk/spdk_pid683012 00:42:45.005 Removing: /var/run/dpdk/spdk_pid717541 00:42:45.005 Removing: /var/run/dpdk/spdk_pid723242 00:42:45.005 Removing: /var/run/dpdk/spdk_pid725101 00:42:45.005 Removing: /var/run/dpdk/spdk_pid727349 00:42:45.005 Removing: /var/run/dpdk/spdk_pid727689 00:42:45.005 Removing: /var/run/dpdk/spdk_pid728033 00:42:45.005 Removing: /var/run/dpdk/spdk_pid728232 00:42:45.005 Removing: /var/run/dpdk/spdk_pid729058 00:42:45.005 Removing: /var/run/dpdk/spdk_pid731148 00:42:45.005 Removing: /var/run/dpdk/spdk_pid732535 00:42:45.005 Removing: /var/run/dpdk/spdk_pid733091 00:42:45.005 Removing: /var/run/dpdk/spdk_pid735633 00:42:45.005 Removing: /var/run/dpdk/spdk_pid736445 00:42:45.005 Removing: /var/run/dpdk/spdk_pid737372 00:42:45.005 Removing: /var/run/dpdk/spdk_pid742502 00:42:45.005 Removing: 
/var/run/dpdk/spdk_pid749267 00:42:45.005 Removing: /var/run/dpdk/spdk_pid749268 00:42:45.005 Removing: /var/run/dpdk/spdk_pid749269 00:42:45.005 Removing: /var/run/dpdk/spdk_pid754074 00:42:45.005 Removing: /var/run/dpdk/spdk_pid765105 00:42:45.005 Removing: /var/run/dpdk/spdk_pid770057 00:42:45.005 Removing: /var/run/dpdk/spdk_pid777395 00:42:45.005 Removing: /var/run/dpdk/spdk_pid778903 00:42:45.005 Removing: /var/run/dpdk/spdk_pid780738 00:42:45.005 Removing: /var/run/dpdk/spdk_pid782264 00:42:45.005 Removing: /var/run/dpdk/spdk_pid788234 00:42:45.005 Removing: /var/run/dpdk/spdk_pid793355 00:42:45.005 Removing: /var/run/dpdk/spdk_pid802734 00:42:45.005 Removing: /var/run/dpdk/spdk_pid802745 00:42:45.005 Removing: /var/run/dpdk/spdk_pid808014 00:42:45.005 Removing: /var/run/dpdk/spdk_pid808191 00:42:45.005 Removing: /var/run/dpdk/spdk_pid808526 00:42:45.005 Removing: /var/run/dpdk/spdk_pid808928 00:42:45.006 Removing: /var/run/dpdk/spdk_pid809051 00:42:45.006 Removing: /var/run/dpdk/spdk_pid815212 00:42:45.006 Removing: /var/run/dpdk/spdk_pid815955 00:42:45.006 Removing: /var/run/dpdk/spdk_pid821298 00:42:45.006 Removing: /var/run/dpdk/spdk_pid824638 00:42:45.006 Removing: /var/run/dpdk/spdk_pid831284 00:42:45.006 Removing: /var/run/dpdk/spdk_pid837717 00:42:45.006 Removing: /var/run/dpdk/spdk_pid848198 00:42:45.006 Removing: /var/run/dpdk/spdk_pid856762 00:42:45.006 Removing: /var/run/dpdk/spdk_pid856798 00:42:45.006 Removing: /var/run/dpdk/spdk_pid880730 00:42:45.006 Removing: /var/run/dpdk/spdk_pid881576 00:42:45.006 Removing: /var/run/dpdk/spdk_pid882406 00:42:45.006 Removing: /var/run/dpdk/spdk_pid883096 00:42:45.006 Removing: /var/run/dpdk/spdk_pid884155 00:42:45.006 Removing: /var/run/dpdk/spdk_pid884845 00:42:45.006 Removing: /var/run/dpdk/spdk_pid885557 00:42:45.006 Removing: /var/run/dpdk/spdk_pid886353 00:42:45.006 Removing: /var/run/dpdk/spdk_pid891645 00:42:45.006 Removing: /var/run/dpdk/spdk_pid891988 00:42:45.006 Removing: /var/run/dpdk/spdk_pid899159 00:42:45.006 Removing: /var/run/dpdk/spdk_pid899485 00:42:45.006 Removing: /var/run/dpdk/spdk_pid906059 00:42:45.006 Removing: /var/run/dpdk/spdk_pid911364 00:42:45.006 Removing: /var/run/dpdk/spdk_pid923396 00:42:45.006 Removing: /var/run/dpdk/spdk_pid924073 00:42:45.267 Removing: /var/run/dpdk/spdk_pid929281 00:42:45.267 Removing: /var/run/dpdk/spdk_pid929711 00:42:45.267 Removing: /var/run/dpdk/spdk_pid934834 00:42:45.267 Removing: /var/run/dpdk/spdk_pid941753 00:42:45.267 Removing: /var/run/dpdk/spdk_pid944832 00:42:45.267 Removing: /var/run/dpdk/spdk_pid957138 00:42:45.267 Removing: /var/run/dpdk/spdk_pid968054 00:42:45.267 Removing: /var/run/dpdk/spdk_pid970507 00:42:45.267 Removing: /var/run/dpdk/spdk_pid971514 00:42:45.267 Removing: /var/run/dpdk/spdk_pid991335 00:42:45.267 Removing: /var/run/dpdk/spdk_pid996156 00:42:45.267 Removing: /var/run/dpdk/spdk_pid999474 00:42:45.267 Clean 00:42:45.267 23:11:12 -- common/autotest_common.sh@1451 -- # return 0 00:42:45.267 23:11:12 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:42:45.267 23:11:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:45.267 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:42:45.267 23:11:12 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:42:45.267 23:11:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:45.267 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:42:45.267 23:11:12 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:45.267 
23:11:12 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:45.267 23:11:12 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:45.267 23:11:12 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:42:45.267 23:11:12 -- spdk/autotest.sh@394 -- # hostname 00:42:45.267 23:11:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:45.528 geninfo: WARNING: invalid characters removed from testname! 00:43:12.265 23:11:37 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:14.179 23:11:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:16.721 23:11:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:18.100 23:11:44 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:20.010 23:11:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:21.394 23:11:48 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:23.306 23:11:49 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:23.306 23:11:49 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:43:23.306 23:11:49 -- common/autotest_common.sh@1681 -- $ lcov --version 00:43:23.306 23:11:49 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:43:23.306 23:11:50 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:43:23.306 23:11:50 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:43:23.306 23:11:50 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:43:23.306 23:11:50 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:43:23.306 23:11:50 -- scripts/common.sh@336 -- $ IFS=.-: 00:43:23.306 23:11:50 -- scripts/common.sh@336 -- $ read -ra ver1 00:43:23.306 23:11:50 -- scripts/common.sh@337 -- $ IFS=.-: 00:43:23.306 23:11:50 -- scripts/common.sh@337 -- $ read -ra ver2 00:43:23.306 23:11:50 -- scripts/common.sh@338 -- $ local 'op=<' 00:43:23.306 23:11:50 -- scripts/common.sh@340 -- $ ver1_l=2 00:43:23.306 23:11:50 -- scripts/common.sh@341 -- $ ver2_l=1 00:43:23.306 23:11:50 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:43:23.306 23:11:50 -- scripts/common.sh@344 -- $ case "$op" in 00:43:23.306 23:11:50 -- scripts/common.sh@345 -- $ : 1 00:43:23.306 23:11:50 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:43:23.306 23:11:50 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:23.306 23:11:50 -- scripts/common.sh@365 -- $ decimal 1 00:43:23.306 23:11:50 -- scripts/common.sh@353 -- $ local d=1 00:43:23.306 23:11:50 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:43:23.306 23:11:50 -- scripts/common.sh@355 -- $ echo 1 00:43:23.306 23:11:50 -- scripts/common.sh@365 -- $ ver1[v]=1 00:43:23.306 23:11:50 -- scripts/common.sh@366 -- $ decimal 2 00:43:23.306 23:11:50 -- scripts/common.sh@353 -- $ local d=2 00:43:23.306 23:11:50 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:43:23.306 23:11:50 -- scripts/common.sh@355 -- $ echo 2 00:43:23.306 23:11:50 -- scripts/common.sh@366 -- $ ver2[v]=2 00:43:23.306 23:11:50 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:43:23.306 23:11:50 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:43:23.306 23:11:50 -- scripts/common.sh@368 -- $ return 0 00:43:23.306 23:11:50 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:23.306 23:11:50 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:43:23.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.306 --rc genhtml_branch_coverage=1 00:43:23.306 --rc genhtml_function_coverage=1 00:43:23.306 --rc genhtml_legend=1 00:43:23.306 --rc geninfo_all_blocks=1 00:43:23.306 --rc geninfo_unexecuted_blocks=1 00:43:23.306 00:43:23.306 ' 00:43:23.306 23:11:50 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:43:23.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.306 --rc genhtml_branch_coverage=1 00:43:23.306 --rc genhtml_function_coverage=1 00:43:23.306 --rc genhtml_legend=1 00:43:23.306 --rc geninfo_all_blocks=1 00:43:23.306 --rc geninfo_unexecuted_blocks=1 00:43:23.306 00:43:23.306 ' 00:43:23.306 23:11:50 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:43:23.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.306 --rc genhtml_branch_coverage=1 00:43:23.306 --rc genhtml_function_coverage=1 00:43:23.306 --rc genhtml_legend=1 00:43:23.306 --rc 
geninfo_all_blocks=1 00:43:23.306 --rc geninfo_unexecuted_blocks=1 00:43:23.306 00:43:23.306 ' 00:43:23.306 23:11:50 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:43:23.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.306 --rc genhtml_branch_coverage=1 00:43:23.306 --rc genhtml_function_coverage=1 00:43:23.306 --rc genhtml_legend=1 00:43:23.306 --rc geninfo_all_blocks=1 00:43:23.306 --rc geninfo_unexecuted_blocks=1 00:43:23.306 00:43:23.306 ' 00:43:23.306 23:11:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:23.306 23:11:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:43:23.306 23:11:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:23.306 23:11:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:23.306 23:11:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:23.306 23:11:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.306 23:11:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.306 23:11:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.306 23:11:50 -- paths/export.sh@5 -- $ export PATH 00:43:23.306 23:11:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.306 23:11:50 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:23.306 23:11:50 -- common/autobuild_common.sh@479 -- $ date +%s 00:43:23.306 23:11:50 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727730710.XXXXXX 00:43:23.306 23:11:50 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727730710.QdNAhR 00:43:23.306 23:11:50 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:43:23.306 23:11:50 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:43:23.306 23:11:50 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:43:23.306 23:11:50 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 
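An aside on the cmp_versions walk traced above (the "lt 1.15 2" check that decides whether this lcov gets the --rc option spelling): scripts/common.sh splits both version strings on ".", "-", and ":" and compares them field by field. A minimal standalone sketch of that logic, assuming absent fields compare as 0 (my reading of the trace, not a verbatim copy of scripts/common.sh):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
      local a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent fields count as 0
      (( a > b )) && { [[ $op == '>' ]]; return; }
      (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]    # every field equal
  }

In the trace, "lt 1.15 2" returns 0, which is why lcov_rc_opt is set and the LCOV_OPTS exported above carry the --rc lcov_branch_coverage/lcov_function_coverage spelling.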
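The sequence of lcov invocations further above is a capture, merge, filter pipeline: capture what executed during the tests into cov_test.info, fold it into the pre-test baseline to produce cov_total.info, then strip dpdk, /usr, and example paths from the total. A condensed sketch of that pipeline; the wrapper function name and the trimmed --rc set are illustrative, while the individual lcov flags are the ones visible in the log:

  post_process_coverage() {
    local src=$1 out=$2
    local opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

    # 1) capture coverage gathered while the tests ran
    lcov $opts -q -c --no-external -d "$src" -t "$(hostname)" \
         -o "$out/cov_test.info"

    # 2) merge the pre-test baseline with the test capture
    lcov $opts -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
         -o "$out/cov_total.info"

    # 3) drop paths that should not count toward coverage
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $opts -q -r "$out/cov_total.info" "$pattern" \
           -o "$out/cov_total.info"
    done
  }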
00:43:23.306 23:11:50 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:23.306 23:11:50 -- common/autobuild_common.sh@495 -- $ get_config_params 00:43:23.306 23:11:50 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:43:23.306 23:11:50 -- common/autotest_common.sh@10 -- $ set +x 00:43:23.306 23:11:50 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:43:23.306 23:11:50 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:43:23.306 23:11:50 -- pm/common@17 -- $ local monitor 00:43:23.306 23:11:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:23.306 23:11:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:23.306 23:11:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:23.307 23:11:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:23.307 23:11:50 -- pm/common@21 -- $ date +%s 00:43:23.307 23:11:50 -- pm/common@25 -- $ sleep 1 00:43:23.307 23:11:50 -- pm/common@21 -- $ date +%s 00:43:23.307 23:11:50 -- pm/common@21 -- $ date +%s 00:43:23.307 23:11:50 -- pm/common@21 -- $ date +%s 00:43:23.307 23:11:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727730710 00:43:23.307 23:11:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727730710 00:43:23.307 23:11:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727730710 00:43:23.307 23:11:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727730710 00:43:23.307 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727730710_collect-cpu-load.pm.log 00:43:23.307 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727730710_collect-cpu-temp.pm.log 00:43:23.307 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727730710_collect-vmstat.pm.log 00:43:23.307 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727730710_collect-bmc-pm.bmc.pm.log 00:43:24.248 23:11:51 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:43:24.248 23:11:51 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:43:24.248 23:11:51 -- spdk/autopackage.sh@14 -- $ timing_finish 00:43:24.248 23:11:51 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:24.248 23:11:51 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:24.248 
23:11:51 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:24.248 23:11:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:24.248 23:11:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:24.248 23:11:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:24.248 23:11:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:24.248 23:11:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:24.248 23:11:51 -- pm/common@44 -- $ pid=1059908 00:43:24.248 23:11:51 -- pm/common@50 -- $ kill -TERM 1059908 00:43:24.248 23:11:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:24.248 23:11:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:24.248 23:11:51 -- pm/common@44 -- $ pid=1059909 00:43:24.248 23:11:51 -- pm/common@50 -- $ kill -TERM 1059909 00:43:24.248 23:11:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:24.248 23:11:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:24.248 23:11:51 -- pm/common@44 -- $ pid=1059911 00:43:24.248 23:11:51 -- pm/common@50 -- $ kill -TERM 1059911 00:43:24.248 23:11:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:24.248 23:11:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:43:24.248 23:11:51 -- pm/common@44 -- $ pid=1059937 00:43:24.248 23:11:51 -- pm/common@50 -- $ sudo -E kill -TERM 1059937 00:43:24.248 + [[ -n 378983 ]] 00:43:24.248 + sudo kill 378983 00:43:24.259 [Pipeline] } 00:43:24.273 [Pipeline] // stage 00:43:24.277 [Pipeline] } 00:43:24.290 [Pipeline] // timeout 00:43:24.294 [Pipeline] } 00:43:24.307 [Pipeline] // catchError 00:43:24.312 [Pipeline] } 00:43:24.324 [Pipeline] // wrap 00:43:24.329 [Pipeline] } 00:43:24.342 [Pipeline] // catchError 00:43:24.349 [Pipeline] stage 00:43:24.351 [Pipeline] { (Epilogue) 00:43:24.363 [Pipeline] catchError 00:43:24.365 [Pipeline] { 00:43:24.377 [Pipeline] echo 00:43:24.379 Cleanup processes 00:43:24.384 [Pipeline] sh 00:43:24.674 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:24.674 1060072 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:43:24.674 1060606 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:24.688 [Pipeline] sh 00:43:24.976 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:24.976 ++ grep -v 'sudo pgrep' 00:43:24.976 ++ awk '{print $1}' 00:43:24.976 + sudo kill -9 1060072 00:43:24.988 [Pipeline] sh 00:43:25.278 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:37.521 [Pipeline] sh 00:43:37.810 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:37.810 Artifacts sizes are good 00:43:37.825 [Pipeline] archiveArtifacts 00:43:37.831 Archiving artifacts 00:43:38.017 [Pipeline] sh 00:43:38.303 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:38.317 [Pipeline] cleanWs 00:43:38.327 [WS-CLEANUP] Deleting project workspace... 00:43:38.327 [WS-CLEANUP] Deferred wipeout is used... 
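Before the workspace wipe completes, the trace above shows two teardown idioms: stop_monitor_resources signals each power monitor through its pidfile, and the pipeline epilogue then sweeps any process still matching the workspace path via pgrep. A compact sketch of both, assuming the pidfile layout from the log; the function names are illustrative, and in the log only the BMC collector is signalled via sudo:

  stop_monitors() {
    local power_dir=$1 monitor pid
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      [[ -e $power_dir/$monitor.pid ]] || continue
      pid=$(<"$power_dir/$monitor.pid")
      kill -TERM "$pid" 2>/dev/null || true   # BMC collector needs sudo in the log
    done
  }

  sweep_workspace() {
    local ws=$1 pids
    # list surviving matches, dropping the pgrep pipeline itself
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    [[ -n $pids ]] && sudo kill -9 $pids || true   # unquoted: one pid per word
  }

The grep -v 'sudo pgrep' step matters: without it the sweep would try to kill its own pgrep invocation, which is why the earlier "+ true" appears after the kill in the prologue trace.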
00:43:38.334 [WS-CLEANUP] done 00:43:38.335 [Pipeline] } 00:43:38.351 [Pipeline] // catchError 00:43:38.361 [Pipeline] sh 00:43:38.672 + logger -p user.info -t JENKINS-CI 00:43:38.683 [Pipeline] } 00:43:38.695 [Pipeline] // stage 00:43:38.700 [Pipeline] } 00:43:38.714 [Pipeline] // node 00:43:38.719 [Pipeline] End of Pipeline 00:43:38.754 Finished: SUCCESS